Test Report: KVM_Linux 21830

3aa0d58a4eff13dd9d5f058e659508fb4ffd2206:2025-11-01:42156
Failed tests (4/364)

Order  Failed test                                    Duration (s)
44     TestAddons/parallel/LocalPath                  230.17
90     TestFunctional/parallel/DashboardCmd           301.77
99     TestFunctional/parallel/PersistentVolumeClaim  370.74
103    TestFunctional/parallel/MySQL                  602.18
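Root cause for the LocalPath failure detailed below: the kubelet events report Docker Hub's unauthenticated pull rate limit ("toomanyrequests") for busybox:stable, so the test pod sat in ImagePullBackOff and never became Ready within the 3m0s wait. A minimal sketch of how the pull failure could be confirmed and avoided on a test host like this one, assuming Docker Hub credentials are available; the commands are illustrative and were not part of this run:

  # Confirm the ImagePullBackOff and rate-limit events on the failed pod
  kubectl --context addons-171954 describe pod test-local-path -n default

  # Side-load the image from the host so the node never pulls from Docker Hub anonymously
  docker login -u <dockerhub-user>
  docker pull busybox:stable
  minikube -p addons-171954 image load busybox:stable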
TestAddons/parallel/LocalPath (230.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-171954 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-171954 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f05c6362-0a7c-45d4-9050-a90af7299c9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-171954 -n addons-171954
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-11-01 09:44:23.733391782 +0000 UTC m=+506.798935409
addons_test.go:962: (dbg) Run:  kubectl --context addons-171954 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-171954 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-171954/192.168.39.221
Start Time:       Sat, 01 Nov 2025 09:41:23 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.35
IPs:
  IP:  10.244.0.35
Containers:
  busybox:
    Container ID:  
    Image:         busybox:stable
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t9m9z (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-t9m9z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  3m                     default-scheduler  Successfully assigned default/test-local-path to addons-171954
Warning  Failed     2m18s (x3 over 2m58s)  kubelet            Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    86s (x4 over 2m59s)    kubelet            Pulling image "busybox:stable"
Warning  Failed     85s (x4 over 2m58s)    kubelet            Error: ErrImagePull
Warning  Failed     85s                    kubelet            Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    7s (x11 over 2m58s)    kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     7s (x11 over 2m58s)    kubelet            Error: ImagePullBackOff
addons_test.go:962: (dbg) Run:  kubectl --context addons-171954 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-171954 logs test-local-path -n default: exit status 1 (82.172203ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:962: kubectl --context addons-171954 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-171954 -n addons-171954
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 logs -n 25
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-238613                                                                                                                                                                                                                                                                                                                                                                                                                       │ download-only-238613 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ --download-only -p binary-mirror-218698 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=kvm2                                                                                                                                                                                                                                                                                                                                │ binary-mirror-218698 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ delete  │ -p binary-mirror-218698                                                                                                                                                                                                                                                                                                                                                                                                                       │ binary-mirror-218698 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ addons  │ disable dashboard -p addons-171954                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ addons  │ enable dashboard -p addons-171954                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p addons-171954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:39 UTC │
	│ addons  │ addons-171954 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                   │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:40 UTC │ 01 Nov 25 09:40 UTC │
	│ addons  │ addons-171954 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:40 UTC │ 01 Nov 25 09:40 UTC │
	│ addons  │ enable headlamp -p addons-171954 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                       │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:40 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                            │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                      │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ addons-171954 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                      │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ip      │ addons-171954 ip                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                               │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                   │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ip      │ addons-171954 ip                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                      │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-171954                                                                                                                                                                                                                                                                                                                                                                │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                            │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                             │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                          │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ addons  │ addons-171954 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                           │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ addons-171954 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                       │ addons-171954        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:36:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:36:30.245288  469001 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:36:30.245406  469001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:30.245411  469001 out.go:374] Setting ErrFile to fd 2...
	I1101 09:36:30.245415  469001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:30.245598  469001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:36:30.246110  469001 out.go:368] Setting JSON to false
	I1101 09:36:30.246991  469001 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4729,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:36:30.247080  469001 start.go:143] virtualization: kvm guest
	I1101 09:36:30.248896  469001 out.go:179] * [addons-171954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:36:30.250151  469001 notify.go:221] Checking for updates...
	I1101 09:36:30.250185  469001 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:36:30.251485  469001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:36:30.252569  469001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:36:30.253762  469001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:36:30.254900  469001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:36:30.256379  469001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:36:30.257616  469001 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:30.288152  469001 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:36:30.289090  469001 start.go:309] selected driver: kvm2
	I1101 09:36:30.289105  469001 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:36:30.289118  469001 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:36:30.289849  469001 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:36:30.290103  469001 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:36:30.290146  469001 cni.go:84] Creating CNI manager for ""
	I1101 09:36:30.290203  469001 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:36:30.290220  469001 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:36:30.290267  469001 start.go:353] cluster config:
	{Name:addons-171954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-171954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s}
	I1101 09:36:30.290372  469001 iso.go:125] acquiring lock: {Name:mk3fea4fe84098591e9ecbbeb78880fff096fc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:36:30.291878  469001 out.go:179] * Starting "addons-171954" primary control-plane node in "addons-171954" cluster
	I1101 09:36:30.292902  469001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1101 09:36:30.292951  469001 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1101 09:36:30.292962  469001 cache.go:59] Caching tarball of preloaded images
	I1101 09:36:30.293048  469001 preload.go:233] Found /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 09:36:30.293059  469001 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1101 09:36:30.293370  469001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/config.json ...
	I1101 09:36:30.293393  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/config.json: {Name:mkefbc0d55f77d17ed1abed56cabdadbcbd7a574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:30.293517  469001 start.go:360] acquireMachinesLock for addons-171954: {Name:mk1b2235e3d206d2962ad67e5caadc1e79068c43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:36:30.293559  469001 start.go:364] duration metric: took 30.414µs to acquireMachinesLock for "addons-171954"
	I1101 09:36:30.293576  469001 start.go:93] Provisioning new machine with config: &{Name:addons-171954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-171954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 09:36:30.293628  469001 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:36:30.295864  469001 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 09:36:30.296042  469001 start.go:159] libmachine.API.Create for "addons-171954" (driver="kvm2")
	I1101 09:36:30.296072  469001 client.go:173] LocalClient.Create starting
	I1101 09:36:30.296165  469001 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem
	I1101 09:36:30.655704  469001 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/cert.pem
	I1101 09:36:30.895161  469001 main.go:143] libmachine: creating domain...
	I1101 09:36:30.895185  469001 main.go:143] libmachine: creating network...
	I1101 09:36:30.896998  469001 main.go:143] libmachine: found existing default network
	I1101 09:36:30.897301  469001 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:36:30.898019  469001 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed6020}
	I1101 09:36:30.898117  469001 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-171954</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:36:30.904075  469001 main.go:143] libmachine: creating private network mk-addons-171954 192.168.39.0/24...
	I1101 09:36:30.971690  469001 main.go:143] libmachine: private network mk-addons-171954 192.168.39.0/24 created
	I1101 09:36:30.971994  469001 main.go:143] libmachine: <network>
	  <name>mk-addons-171954</name>
	  <uuid>f9b95af3-7a49-4fa1-8304-b0637b1a97a1</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:0c:e9:ef'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:36:30.972029  469001 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954 ...
	I1101 09:36:30.972071  469001 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-464466/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:36:30.972082  469001 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:36:30.972144  469001 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-464466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-464466/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:36:31.251760  469001 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa...
	I1101 09:36:31.421796  469001 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/addons-171954.rawdisk...
	I1101 09:36:31.421854  469001 main.go:143] libmachine: Writing magic tar header
	I1101 09:36:31.421881  469001 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:36:31.422002  469001 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954 ...
	I1101 09:36:31.422063  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954
	I1101 09:36:31.422090  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954 (perms=drwx------)
	I1101 09:36:31.422099  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-464466/.minikube/machines
	I1101 09:36:31.422111  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-464466/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:36:31.422123  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:36:31.422135  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-464466/.minikube (perms=drwxr-xr-x)
	I1101 09:36:31.422144  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-464466
	I1101 09:36:31.422154  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-464466 (perms=drwxrwxr-x)
	I1101 09:36:31.422163  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:36:31.422173  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:36:31.422182  469001 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:36:31.422191  469001 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:36:31.422200  469001 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:36:31.422206  469001 main.go:143] libmachine: skipping /home - not owner
	I1101 09:36:31.422213  469001 main.go:143] libmachine: defining domain...
	I1101 09:36:31.423420  469001 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-171954</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/addons-171954.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-171954'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:36:31.430969  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:a4:e7:fa in network default
	I1101 09:36:31.431595  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:31.431610  469001 main.go:143] libmachine: starting domain...
	I1101 09:36:31.431615  469001 main.go:143] libmachine: ensuring networks are active...
	I1101 09:36:31.432415  469001 main.go:143] libmachine: Ensuring network default is active
	I1101 09:36:31.432842  469001 main.go:143] libmachine: Ensuring network mk-addons-171954 is active
	I1101 09:36:31.433631  469001 main.go:143] libmachine: getting domain XML...
	I1101 09:36:31.434764  469001 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-171954</name>
	  <uuid>27488242-efd5-4e91-97bb-46360de477f6</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/addons-171954.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:09:d6:7e'/>
	      <source network='mk-addons-171954'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a4:e7:fa'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:36:32.742163  469001 main.go:143] libmachine: waiting for domain to start...
	I1101 09:36:32.743558  469001 main.go:143] libmachine: domain is now running
	I1101 09:36:32.743583  469001 main.go:143] libmachine: waiting for IP...
	I1101 09:36:32.744336  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:32.744793  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:32.744824  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:32.745123  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:32.745181  469001 retry.go:31] will retry after 296.108954ms: waiting for domain to come up
	I1101 09:36:33.042981  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:33.043571  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:33.043605  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:33.043937  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:33.043995  469001 retry.go:31] will retry after 271.675856ms: waiting for domain to come up
	I1101 09:36:33.317534  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:33.318090  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:33.318108  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:33.318562  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:33.318607  469001 retry.go:31] will retry after 424.343789ms: waiting for domain to come up
	I1101 09:36:33.744593  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:33.745185  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:33.745207  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:33.745525  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:33.745576  469001 retry.go:31] will retry after 463.405267ms: waiting for domain to come up
	I1101 09:36:34.210352  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:34.211028  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:34.211048  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:34.211369  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:34.211412  469001 retry.go:31] will retry after 615.539074ms: waiting for domain to come up
	I1101 09:36:34.828501  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:34.829143  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:34.829180  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:34.829522  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:34.829584  469001 retry.go:31] will retry after 683.523221ms: waiting for domain to come up
	I1101 09:36:35.514456  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:35.515044  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:35.515067  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:35.515380  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:35.515428  469001 retry.go:31] will retry after 1.122275158s: waiting for domain to come up
	I1101 09:36:36.639750  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:36.640303  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:36.640323  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:36.640666  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:36.640711  469001 retry.go:31] will retry after 1.206054202s: waiting for domain to come up
	I1101 09:36:37.849188  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:37.849728  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:37.849746  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:37.850148  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:37.850190  469001 retry.go:31] will retry after 1.814243382s: waiting for domain to come up
	I1101 09:36:39.667479  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:39.668084  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:39.668102  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:39.668466  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:39.668515  469001 retry.go:31] will retry after 2.091053725s: waiting for domain to come up
	I1101 09:36:41.761952  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:41.762486  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:41.762510  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:41.762921  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:41.762985  469001 retry.go:31] will retry after 2.390988361s: waiting for domain to come up
	I1101 09:36:44.155965  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:44.156447  469001 main.go:143] libmachine: no network interface addresses found for domain addons-171954 (source=lease)
	I1101 09:36:44.156468  469001 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:36:44.156862  469001 main.go:143] libmachine: unable to find current IP address of domain addons-171954 in network mk-addons-171954 (interfaces detected: [])
	I1101 09:36:44.156961  469001 retry.go:31] will retry after 3.089905159s: waiting for domain to come up
	I1101 09:36:47.249421  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.249948  469001 main.go:143] libmachine: domain addons-171954 has current primary IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.249970  469001 main.go:143] libmachine: found domain IP: 192.168.39.221
	I1101 09:36:47.249980  469001 main.go:143] libmachine: reserving static IP address...
	I1101 09:36:47.250370  469001 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-171954", mac: "52:54:00:09:d6:7e", ip: "192.168.39.221"} in network mk-addons-171954
	I1101 09:36:47.461454  469001 main.go:143] libmachine: reserved static IP address 192.168.39.221 for domain addons-171954
	I1101 09:36:47.461479  469001 main.go:143] libmachine: waiting for SSH...
	I1101 09:36:47.461485  469001 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:36:47.464494  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.464895  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.464922  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.465120  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:47.465362  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:47.465375  469001 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:36:47.583022  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:36:47.583384  469001 main.go:143] libmachine: domain creation complete
	I1101 09:36:47.585212  469001 machine.go:94] provisionDockerMachine start ...
	I1101 09:36:47.587347  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.587721  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.587747  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.587916  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:47.588109  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:47.588119  469001 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:36:47.705552  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:36:47.705584  469001 buildroot.go:166] provisioning hostname "addons-171954"
	I1101 09:36:47.708762  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.709259  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.709287  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.709457  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:47.709675  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:47.709691  469001 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-171954 && echo "addons-171954" | sudo tee /etc/hostname
	I1101 09:36:47.844313  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-171954
	
	I1101 09:36:47.847347  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.847771  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.847792  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.847986  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:47.848214  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:47.848233  469001 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-171954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-171954/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-171954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:36:47.975050  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:36:47.975085  469001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-464466/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-464466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-464466/.minikube}
	I1101 09:36:47.975109  469001 buildroot.go:174] setting up certificates
	I1101 09:36:47.975120  469001 provision.go:84] configureAuth start
	I1101 09:36:47.978139  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.978551  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.978571  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.981164  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.981534  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:47.981557  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:47.981667  469001 provision.go:143] copyHostCerts
	I1101 09:36:47.981746  469001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-464466/.minikube/cert.pem (1123 bytes)
	I1101 09:36:47.981886  469001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-464466/.minikube/key.pem (1675 bytes)
	I1101 09:36:47.981953  469001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-464466/.minikube/ca.pem (1082 bytes)
	I1101 09:36:47.982000  469001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-464466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca-key.pem org=jenkins.addons-171954 san=[127.0.0.1 192.168.39.221 addons-171954 localhost minikube]
	I1101 09:36:48.006979  469001 provision.go:177] copyRemoteCerts
	I1101 09:36:48.007042  469001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:36:48.009695  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.010039  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:48.010061  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.010208  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:36:48.099470  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:36:48.133716  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:36:48.163278  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:36:48.192419  469001 provision.go:87] duration metric: took 217.278965ms to configureAuth
	I1101 09:36:48.192467  469001 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:36:48.192669  469001 config.go:182] Loaded profile config "addons-171954": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:36:48.195455  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.195940  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:48.195968  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.196194  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:48.196482  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:48.196498  469001 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 09:36:48.316892  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 09:36:48.316919  469001 buildroot.go:70] root file system type: tmpfs
	I1101 09:36:48.317051  469001 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 09:36:48.320168  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.320652  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:48.320678  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.320910  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:48.321177  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:48.321223  469001 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 09:36:48.455016  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 09:36:48.458035  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.458516  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:48.458541  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:48.458742  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:48.458988  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:48.459008  469001 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 09:36:49.376013  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I1101 09:36:49.376048  469001 machine.go:97] duration metric: took 1.790799381s to provisionDockerMachine
	I1101 09:36:49.376061  469001 client.go:176] duration metric: took 19.079984405s to LocalClient.Create
	I1101 09:36:49.376099  469001 start.go:167] duration metric: took 19.080037284s to libmachine.API.Create "addons-171954"
	I1101 09:36:49.376114  469001 start.go:293] postStartSetup for "addons-171954" (driver="kvm2")
	I1101 09:36:49.376123  469001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:36:49.376196  469001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:36:49.379134  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.379578  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.379602  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.379769  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:36:49.469562  469001 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:36:49.474397  469001 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:36:49.474437  469001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-464466/.minikube/addons for local assets ...
	I1101 09:36:49.474536  469001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-464466/.minikube/files for local assets ...
	I1101 09:36:49.474567  469001 start.go:296] duration metric: took 98.447977ms for postStartSetup
	I1101 09:36:49.477414  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.477799  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.477857  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.478090  469001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/config.json ...
	I1101 09:36:49.478283  469001 start.go:128] duration metric: took 19.184638849s to createHost
	I1101 09:36:49.480353  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.480661  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.480687  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.480838  469001 main.go:143] libmachine: Using SSH client type: native
	I1101 09:36:49.481021  469001 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1101 09:36:49.481030  469001 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:36:49.597482  469001 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761989809.558232881
	
	I1101 09:36:49.597510  469001 fix.go:216] guest clock: 1761989809.558232881
	I1101 09:36:49.597520  469001 fix.go:229] Guest: 2025-11-01 09:36:49.558232881 +0000 UTC Remote: 2025-11-01 09:36:49.478294641 +0000 UTC m=+19.284432325 (delta=79.93824ms)
	I1101 09:36:49.597536  469001 fix.go:200] guest clock delta is within tolerance: 79.93824ms
	I1101 09:36:49.597542  469001 start.go:83] releasing machines lock for "addons-171954", held for 19.303974056s
	I1101 09:36:49.600762  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.601435  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.601478  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.602152  469001 ssh_runner.go:195] Run: cat /version.json
	I1101 09:36:49.602183  469001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:36:49.605455  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.605818  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.606020  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.606055  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.606258  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:49.606296  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:49.606345  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:36:49.606528  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:36:49.689276  469001 ssh_runner.go:195] Run: systemctl --version
	I1101 09:36:49.730104  469001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:36:49.736643  469001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:36:49.736716  469001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:36:49.756574  469001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:36:49.756621  469001 start.go:496] detecting cgroup driver to use...
	I1101 09:36:49.756838  469001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:36:49.779001  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1101 09:36:49.791329  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 09:36:49.804917  469001 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 09:36:49.804990  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 09:36:49.817619  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 09:36:49.830175  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 09:36:49.842852  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 09:36:49.855059  469001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:36:49.867606  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 09:36:49.880413  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1101 09:36:49.892638  469001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1101 09:36:49.905549  469001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:36:49.916010  469001 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:36:49.916083  469001 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:36:49.929354  469001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:36:49.941056  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:50.083143  469001 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 09:36:50.127318  469001 start.go:496] detecting cgroup driver to use...
	I1101 09:36:50.127413  469001 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 09:36:50.145118  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:36:50.167237  469001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:36:50.195233  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:36:50.212434  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 09:36:50.229010  469001 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 09:36:50.611257  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 09:36:50.633139  469001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:36:50.657710  469001 ssh_runner.go:195] Run: which cri-dockerd
	I1101 09:36:50.662443  469001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 09:36:50.674786  469001 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1101 09:36:50.695652  469001 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 09:36:50.848361  469001 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 09:36:51.000889  469001 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 09:36:51.001037  469001 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 09:36:51.022821  469001 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1101 09:36:51.040187  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:51.189565  469001 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 09:36:51.639964  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:36:51.660869  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1101 09:36:51.676742  469001 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1101 09:36:51.697781  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1101 09:36:51.713367  469001 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 09:36:51.853039  469001 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 09:36:52.006166  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:52.147471  469001 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 09:36:52.184171  469001 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1101 09:36:52.199081  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:52.342299  469001 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1101 09:36:52.446501  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1101 09:36:52.463482  469001 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 09:36:52.463588  469001 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 09:36:52.469850  469001 start.go:564] Will wait 60s for crictl version
	I1101 09:36:52.469939  469001 ssh_runner.go:195] Run: which crictl
	I1101 09:36:52.474338  469001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:36:52.513368  469001 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1101 09:36:52.513461  469001 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 09:36:52.542540  469001 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 09:36:52.568654  469001 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1101 09:36:52.571517  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:52.571970  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:36:52.572006  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:36:52.572279  469001 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:36:52.576791  469001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:36:52.592131  469001 kubeadm.go:884] updating cluster {Name:addons-171954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-171954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:36:52.592259  469001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1101 09:36:52.592325  469001 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 09:36:52.609197  469001 docker.go:691] Got preloaded images: 
	I1101 09:36:52.609221  469001 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.1 wasn't preloaded
	I1101 09:36:52.609276  469001 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1101 09:36:52.621552  469001 ssh_runner.go:195] Run: which lz4
	I1101 09:36:52.626483  469001 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:36:52.632065  469001 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:36:52.632103  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353378914 bytes)
	I1101 09:36:53.849057  469001 docker.go:655] duration metric: took 1.222587547s to copy over tarball
	I1101 09:36:53.849187  469001 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:36:55.152678  469001 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.303446478s)
	I1101 09:36:55.152723  469001 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 09:36:55.197074  469001 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1101 09:36:55.210907  469001 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I1101 09:36:55.235304  469001 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1101 09:36:55.251698  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:55.402780  469001 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 09:36:57.753698  469001 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.350874115s)
	I1101 09:36:57.753835  469001 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 09:36:57.773712  469001 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1101 09:36:57.773745  469001 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:36:57.773756  469001 kubeadm.go:935] updating node { 192.168.39.221 8443 v1.34.1 docker true true} ...
	I1101 09:36:57.773920  469001 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-171954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-171954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:36:57.774012  469001 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 09:36:57.825913  469001 cni.go:84] Creating CNI manager for ""
	I1101 09:36:57.825980  469001 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:36:57.826011  469001 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:36:57.826045  469001 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-171954 NodeName:addons-171954 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:36:57.826207  469001 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-171954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.221"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:36:57.826305  469001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:36:57.838261  469001 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:36:57.838357  469001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:36:57.850004  469001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1101 09:36:57.870185  469001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:36:57.890403  469001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1101 09:36:57.910900  469001 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I1101 09:36:57.915440  469001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:36:57.932004  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:36:58.090910  469001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:36:58.126474  469001 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954 for IP: 192.168.39.221
	I1101 09:36:58.126511  469001 certs.go:195] generating shared ca certs ...
	I1101 09:36:58.126535  469001 certs.go:227] acquiring lock for ca certs: {Name:mk1e80d860f3397dfd4b0bafae355f6828d4adfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:58.126735  469001 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-464466/.minikube/ca.key
	I1101 09:36:58.246605  469001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-464466/.minikube/ca.crt ...
	I1101 09:36:58.246643  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/ca.crt: {Name:mkf79ca639217e08872aa4e6d54730870ceaa494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:58.246883  469001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-464466/.minikube/ca.key ...
	I1101 09:36:58.246903  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/ca.key: {Name:mkd2b03d97e83748bdb130bfce839b8e3f6a321e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:58.247035  469001 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.key
	I1101 09:36:58.800657  469001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.crt ...
	I1101 09:36:58.800694  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.crt: {Name:mk4ff95238b91b41b98e73c33899a595dbe87d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:58.800919  469001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.key ...
	I1101 09:36:58.800938  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.key: {Name:mk74383efee590c8caf3c5ade64ee1b38f3055d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:58.801047  469001 certs.go:257] generating profile certs ...
	I1101 09:36:58.801124  469001 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.key
	I1101 09:36:58.801144  469001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt with IP's: []
	I1101 09:36:59.042798  469001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt ...
	I1101 09:36:59.042844  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: {Name:mkca9f4ce281f2ee3b734c5641eb03c8629c652a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.043050  469001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.key ...
	I1101 09:36:59.043069  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.key: {Name:mk2a65bd7c6f8ee97cb348d0ef2f8ea8b0a6a2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.043178  469001 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key.79765a49
	I1101 09:36:59.043204  469001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt.79765a49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.221]
	I1101 09:36:59.069581  469001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt.79765a49 ...
	I1101 09:36:59.069640  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt.79765a49: {Name:mkdd08c792d3df0d6ffbc7d3ba860e3bc76cee74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.069834  469001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key.79765a49 ...
	I1101 09:36:59.069853  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key.79765a49: {Name:mk53df539e4ea877eb39b040a02cba45858f8d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.069970  469001 certs.go:382] copying /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt.79765a49 -> /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt
	I1101 09:36:59.070099  469001 certs.go:386] copying /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key.79765a49 -> /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key
	I1101 09:36:59.070188  469001 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.key
	I1101 09:36:59.070217  469001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.crt with IP's: []
	I1101 09:36:59.725520  469001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.crt ...
	I1101 09:36:59.725553  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.crt: {Name:mka6733d3a6d27adc595e20b6c2caac9bea90536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.725761  469001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.key ...
	I1101 09:36:59.725781  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.key: {Name:mk4d618554cc0138b07ad54a61f886506370d119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:59.726041  469001 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:36:59.726087  469001 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:36:59.726127  469001 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:36:59.726157  469001 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-464466/.minikube/certs/key.pem (1675 bytes)
	I1101 09:36:59.726769  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:36:59.757151  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:36:59.786084  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:36:59.814300  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:36:59.842640  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:36:59.870306  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:36:59.897960  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:36:59.926818  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:36:59.955565  469001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-464466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:36:59.983653  469001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:37:00.003645  469001 ssh_runner.go:195] Run: openssl version
	I1101 09:37:00.010284  469001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:37:00.023341  469001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:37:00.028369  469001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:36 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:37:00.028437  469001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:37:00.035390  469001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:37:00.048002  469001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:37:00.052722  469001 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:37:00.052794  469001 kubeadm.go:401] StartCluster: {Name:addons-171954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-171954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:37:00.052938  469001 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 09:37:00.070510  469001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:37:00.082984  469001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:37:00.095212  469001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:37:00.106828  469001 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:37:00.106853  469001 kubeadm.go:158] found existing configuration files:
	
	I1101 09:37:00.106908  469001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:37:00.118157  469001 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:37:00.118243  469001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:37:00.130082  469001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:37:00.140864  469001 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:37:00.140944  469001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:37:00.152443  469001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:37:00.163609  469001 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:37:00.163702  469001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:37:00.175061  469001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:37:00.185945  469001 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:37:00.186020  469001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:37:00.197399  469001 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 09:37:00.351518  469001 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:37:12.262131  469001 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:37:12.262207  469001 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:37:12.262283  469001 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:37:12.262418  469001 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:37:12.262543  469001 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:37:12.262609  469001 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:37:12.264715  469001 out.go:252]   - Generating certificates and keys ...
	I1101 09:37:12.264818  469001 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:37:12.264873  469001 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:37:12.264934  469001 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:37:12.264988  469001 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:37:12.265041  469001 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:37:12.265081  469001 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:37:12.265165  469001 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:37:12.265327  469001 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-171954 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1101 09:37:12.265425  469001 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:37:12.265577  469001 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-171954 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1101 09:37:12.265668  469001 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:37:12.265758  469001 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:37:12.265848  469001 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:37:12.265949  469001 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:37:12.266020  469001 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:37:12.266111  469001 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:37:12.266191  469001 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:37:12.266287  469001 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:37:12.266364  469001 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:37:12.266458  469001 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:37:12.266523  469001 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:37:12.268012  469001 out.go:252]   - Booting up control plane ...
	I1101 09:37:12.268120  469001 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:37:12.268246  469001 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:37:12.268346  469001 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:37:12.268464  469001 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:37:12.268585  469001 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:37:12.268731  469001 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:37:12.268868  469001 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:37:12.268929  469001 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:37:12.269130  469001 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:37:12.269274  469001 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:37:12.269373  469001 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.847083ms
	I1101 09:37:12.269497  469001 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:37:12.269601  469001 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.221:8443/livez
	I1101 09:37:12.269736  469001 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:37:12.269836  469001 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:37:12.269953  469001 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.734469277s
	I1101 09:37:12.270046  469001 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.968340128s
	I1101 09:37:12.270145  469001 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003297071s
	I1101 09:37:12.270293  469001 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:37:12.270417  469001 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:37:12.270467  469001 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:37:12.270639  469001 kubeadm.go:319] [mark-control-plane] Marking the node addons-171954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:37:12.270726  469001 kubeadm.go:319] [bootstrap-token] Using token: l1x5vv.tpepz1lanbv8m0r3
	I1101 09:37:12.272284  469001 out.go:252]   - Configuring RBAC rules ...
	I1101 09:37:12.272406  469001 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:37:12.272503  469001 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:37:12.272677  469001 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:37:12.272871  469001 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:37:12.272989  469001 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:37:12.273098  469001 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:37:12.273222  469001 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:37:12.273260  469001 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:37:12.273298  469001 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:37:12.273304  469001 kubeadm.go:319] 
	I1101 09:37:12.273350  469001 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:37:12.273355  469001 kubeadm.go:319] 
	I1101 09:37:12.273414  469001 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:37:12.273419  469001 kubeadm.go:319] 
	I1101 09:37:12.273439  469001 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:37:12.273497  469001 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:37:12.273559  469001 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:37:12.273566  469001 kubeadm.go:319] 
	I1101 09:37:12.273635  469001 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:37:12.273642  469001 kubeadm.go:319] 
	I1101 09:37:12.273698  469001 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:37:12.273707  469001 kubeadm.go:319] 
	I1101 09:37:12.273779  469001 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:37:12.273905  469001 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:37:12.274005  469001 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:37:12.274014  469001 kubeadm.go:319] 
	I1101 09:37:12.274128  469001 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:37:12.274238  469001 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:37:12.274245  469001 kubeadm.go:319] 
	I1101 09:37:12.274309  469001 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token l1x5vv.tpepz1lanbv8m0r3 \
	I1101 09:37:12.274390  469001 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d57366502d53ab68e7255ec440f7ddb3b2750eee283c073ca041cba72a5f4777 \
	I1101 09:37:12.274411  469001 kubeadm.go:319] 	--control-plane 
	I1101 09:37:12.274417  469001 kubeadm.go:319] 
	I1101 09:37:12.274533  469001 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:37:12.274541  469001 kubeadm.go:319] 
	I1101 09:37:12.274644  469001 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token l1x5vv.tpepz1lanbv8m0r3 \
	I1101 09:37:12.274796  469001 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d57366502d53ab68e7255ec440f7ddb3b2750eee283c073ca041cba72a5f4777 
	I1101 09:37:12.274830  469001 cni.go:84] Creating CNI manager for ""
	I1101 09:37:12.274851  469001 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:37:12.276321  469001 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:37:12.277668  469001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:37:12.294410  469001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 09:37:12.325397  469001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:37:12.325482  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:12.325504  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-171954 minikube.k8s.io/updated_at=2025_11_01T09_37_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-171954 minikube.k8s.io/primary=true
	I1101 09:37:12.468402  469001 ops.go:34] apiserver oom_adj: -16
	I1101 09:37:12.484064  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:12.984628  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:13.485231  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:13.984881  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:14.484349  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:14.984963  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:15.484608  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:15.984767  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:16.484258  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:16.984932  469001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:37:17.107824  469001 kubeadm.go:1114] duration metric: took 4.782393981s to wait for elevateKubeSystemPrivileges
	I1101 09:37:17.107880  469001 kubeadm.go:403] duration metric: took 17.055090878s to StartCluster
	I1101 09:37:17.107909  469001 settings.go:142] acquiring lock: {Name:mk4902adee1dd0d19abbbcc39f8cd9db61b94cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:37:17.108095  469001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:37:17.108768  469001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-464466/kubeconfig: {Name:mkeadc537146a33d7c4e31279881deb75116449c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:37:17.109078  469001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:37:17.109109  469001 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 09:37:17.109184  469001 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:37:17.109338  469001 config.go:182] Loaded profile config "addons-171954": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:37:17.109371  469001 addons.go:70] Setting yakd=true in profile "addons-171954"
	I1101 09:37:17.109396  469001 addons.go:239] Setting addon yakd=true in "addons-171954"
	I1101 09:37:17.109407  469001 addons.go:70] Setting cloud-spanner=true in profile "addons-171954"
	I1101 09:37:17.109423  469001 addons.go:239] Setting addon cloud-spanner=true in "addons-171954"
	I1101 09:37:17.109406  469001 addons.go:70] Setting default-storageclass=true in profile "addons-171954"
	I1101 09:37:17.109450  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.109459  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.109474  469001 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-171954"
	I1101 09:37:17.109526  469001 addons.go:70] Setting registry-creds=true in profile "addons-171954"
	I1101 09:37:17.109532  469001 addons.go:70] Setting storage-provisioner=true in profile "addons-171954"
	I1101 09:37:17.109575  469001 addons.go:239] Setting addon registry-creds=true in "addons-171954"
	I1101 09:37:17.109598  469001 addons.go:239] Setting addon storage-provisioner=true in "addons-171954"
	I1101 09:37:17.109619  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.109675  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.110354  469001 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-171954"
	I1101 09:37:17.110375  469001 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-171954"
	I1101 09:37:17.110406  469001 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-171954"
	I1101 09:37:17.110409  469001 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-171954"
	I1101 09:37:17.110438  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.110451  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.110531  469001 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-171954"
	I1101 09:37:17.110577  469001 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-171954"
	I1101 09:37:17.110624  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.109397  469001 addons.go:70] Setting registry=true in profile "addons-171954"
	I1101 09:37:17.111214  469001 addons.go:239] Setting addon registry=true in "addons-171954"
	I1101 09:37:17.111227  469001 addons.go:70] Setting volcano=true in profile "addons-171954"
	I1101 09:37:17.111243  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.111244  469001 addons.go:239] Setting addon volcano=true in "addons-171954"
	I1101 09:37:17.111272  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.111295  469001 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-171954"
	I1101 09:37:17.111329  469001 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-171954"
	I1101 09:37:17.111935  469001 addons.go:70] Setting volumesnapshots=true in profile "addons-171954"
	I1101 09:37:17.111958  469001 addons.go:239] Setting addon volumesnapshots=true in "addons-171954"
	I1101 09:37:17.111989  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.112059  469001 addons.go:70] Setting gcp-auth=true in profile "addons-171954"
	I1101 09:37:17.112106  469001 mustload.go:66] Loading cluster: addons-171954
	I1101 09:37:17.112203  469001 addons.go:70] Setting ingress-dns=true in profile "addons-171954"
	I1101 09:37:17.112341  469001 out.go:179] * Verifying Kubernetes components...
	I1101 09:37:17.112388  469001 addons.go:70] Setting metrics-server=true in profile "addons-171954"
	I1101 09:37:17.112411  469001 addons.go:239] Setting addon metrics-server=true in "addons-171954"
	I1101 09:37:17.112228  469001 addons.go:239] Setting addon ingress-dns=true in "addons-171954"
	I1101 09:37:17.112436  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.112595  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.112372  469001 addons.go:70] Setting ingress=true in profile "addons-171954"
	I1101 09:37:17.112928  469001 addons.go:239] Setting addon ingress=true in "addons-171954"
	I1101 09:37:17.112961  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.112378  469001 addons.go:70] Setting inspektor-gadget=true in profile "addons-171954"
	I1101 09:37:17.113342  469001 addons.go:239] Setting addon inspektor-gadget=true in "addons-171954"
	I1101 09:37:17.113377  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.112364  469001 config.go:182] Loaded profile config "addons-171954": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:37:17.114187  469001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:37:17.117340  469001 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:37:17.117344  469001 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:37:17.117414  469001 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:37:17.117420  469001 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:37:17.118495  469001 addons.go:239] Setting addon default-storageclass=true in "addons-171954"
	I1101 09:37:17.118542  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.118662  469001 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:37:17.118703  469001 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:37:17.118842  469001 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:37:17.118860  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:37:17.120200  469001 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-171954"
	I1101 09:37:17.120252  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.121101  469001 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:37:17.121126  469001 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:37:17.121545  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:17.121233  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:37:17.121227  469001 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:37:17.121977  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:37:17.121350  469001 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:37:17.122402  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:37:17.123601  469001 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:37:17.123604  469001 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:37:17.123602  469001 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:37:17.123636  469001 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1101 09:37:17.123640  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:37:17.124073  469001 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:37:17.124865  469001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:37:17.124865  469001 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:37:17.125541  469001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:37:17.124934  469001 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:37:17.125936  469001 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:37:17.125951  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:37:17.125950  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:37:17.124935  469001 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:37:17.126051  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:37:17.127043  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.127187  469001 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:37:17.127241  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:37:17.127636  469001 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:37:17.127254  469001 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:37:17.127679  469001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:37:17.128119  469001 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:37:17.128126  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:37:17.128136  469001 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:37:17.128129  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.129036  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.129103  469001 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:37:17.129122  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.130482  469001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:37:17.130048  469001 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1101 09:37:17.130778  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.130855  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.131665  469001 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:37:17.131681  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:37:17.131680  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.131301  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.132566  469001 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:37:17.132615  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.132567  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:37:17.132876  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.133815  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.133849  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.134198  469001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:37:17.134241  469001 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:37:17.134254  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:37:17.134214  469001 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1101 09:37:17.134585  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.135335  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.134713  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.135720  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:37:17.136314  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.136676  469001 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:37:17.136698  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:37:17.138826  469001 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1101 09:37:17.138847  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1101 09:37:17.139223  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.139650  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.140584  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:37:17.140718  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.140761  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.140912  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.140951  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.141108  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.141188  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.141219  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.141361  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.141683  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.141737  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.141989  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.142457  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.142491  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.142864  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.142899  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.142967  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.143015  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.143085  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.143120  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.143143  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.143197  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:37:17.143461  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.144028  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.144060  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.144074  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.144072  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.144185  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.144831  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.144913  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.144971  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.145436  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.145841  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:37:17.145843  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.146457  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.146493  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.146549  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.146713  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.146884  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.147180  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.147210  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.147337  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.147371  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.147379  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.147578  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.149041  469001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:37:17.150173  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:37:17.150208  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:37:17.152765  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.153131  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:17.153162  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:17.153335  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:17.844345  469001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:37:17.844368  469001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:37:18.111819  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:37:18.267475  469001 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:37:18.267505  469001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:37:18.271728  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:37:18.296820  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:37:18.545249  469001 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:37:18.545284  469001 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:37:18.562321  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:37:18.628493  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:37:18.675648  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:37:18.676470  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:37:18.697091  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:37:18.722630  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:37:18.722677  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:37:18.728368  469001 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:18.728402  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:37:18.734297  469001 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:37:18.734320  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:37:18.763963  469001 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:37:18.764000  469001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:37:18.802909  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1101 09:37:18.889357  469001 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:37:18.889393  469001 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:37:18.985497  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:37:19.042489  469001 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:37:19.042525  469001 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:37:19.099486  469001 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:37:19.099527  469001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:37:19.182609  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:37:19.182639  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:37:19.190902  469001 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:37:19.190930  469001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:37:19.255454  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:19.331374  469001 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:37:19.331402  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:37:19.383046  469001 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:37:19.383083  469001 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:37:19.395056  469001 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:37:19.395086  469001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:37:19.504072  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:37:19.504103  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:37:19.531786  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:37:19.531835  469001 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:37:19.749874  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:37:19.764725  469001 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:37:19.764749  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:37:19.774952  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:37:19.854796  469001 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:37:19.854847  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:37:19.995363  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:37:19.995400  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:37:20.223986  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:37:20.278666  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:37:20.428372  469001 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:37:20.428407  469001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:37:20.729927  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:37:20.729986  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:37:21.210504  469001 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.366093074s)
	I1101 09:37:21.210548  469001 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 09:37:21.210686  469001 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.366294479s)
	I1101 09:37:21.210736  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.098880888s)
	I1101 09:37:21.211644  469001 node_ready.go:35] waiting up to 6m0s for node "addons-171954" to be "Ready" ...
	I1101 09:37:21.219368  469001 node_ready.go:49] node "addons-171954" is "Ready"
	I1101 09:37:21.219400  469001 node_ready.go:38] duration metric: took 7.705857ms for node "addons-171954" to be "Ready" ...
	I1101 09:37:21.219424  469001 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:37:21.219484  469001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:37:21.473233  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:37:21.473268  469001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:37:21.731402  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:37:21.731426  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:37:21.739479  469001 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-171954" context rescaled to 1 replicas
	I1101 09:37:21.869746  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.597966707s)
	I1101 09:37:21.869834  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.572952167s)
	I1101 09:37:22.292556  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:37:22.292580  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:37:23.014833  469001 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:37:23.014863  469001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:37:23.473164  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:37:23.899533  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.270997331s)
	I1101 09:37:23.901073  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.338710518s)
	I1101 09:37:24.532665  469001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:37:24.535689  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:24.536350  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:24.536394  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:24.536588  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:25.904774  469001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:37:26.356246  469001 addons.go:239] Setting addon gcp-auth=true in "addons-171954"
	I1101 09:37:26.356317  469001 host.go:66] Checking if "addons-171954" exists ...
	I1101 09:37:26.358627  469001 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:37:26.361268  469001 main.go:143] libmachine: domain addons-171954 has defined MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:26.361770  469001 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:d6:7e", ip: ""} in network mk-addons-171954: {Iface:virbr1 ExpiryTime:2025-11-01 10:36:46 +0000 UTC Type:0 Mac:52:54:00:09:d6:7e Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-171954 Clientid:01:52:54:00:09:d6:7e}
	I1101 09:37:26.361793  469001 main.go:143] libmachine: domain addons-171954 has defined IP address 192.168.39.221 and MAC address 52:54:00:09:d6:7e in network mk-addons-171954
	I1101 09:37:26.361969  469001 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/addons-171954/id_rsa Username:docker}
	I1101 09:37:28.425153  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.749443861s)
	I1101 09:37:28.425210  469001 addons.go:480] Verifying addon ingress=true in "addons-171954"
	I1101 09:37:28.425228  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.748717977s)
	I1101 09:37:28.425271  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.728144218s)
	I1101 09:37:28.427620  469001 out.go:179] * Verifying ingress addon...
	I1101 09:37:28.429976  469001 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:37:28.458222  469001 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:37:28.458247  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:29.012378  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:29.548265  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:30.010354  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:30.578589  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:31.066614  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:31.444457  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:32.086838  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:32.312634  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.509671825s)
	I1101 09:37:32.312717  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.327182118s)
	I1101 09:37:32.312812  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.057325062s)
	W1101 09:37:32.312849  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:32.312871  469001 retry.go:31] will retry after 199.482504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
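(Note on the failure above: the stderr shows that at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml reaches the node without the required top-level apiVersion and kind fields, so kubectl's client-side validation rejects the whole apply and the addon installer keeps retrying. A minimal sketch of checking this locally, under the assumption that the file is available off the node; the flags are standard kubectl, and the example header values are only illustrative of what a CRD document normally declares:)

	# Hedged sketch: run the same client-side validation locally (path illustrative).
	kubectl apply --dry-run=client --validate=true -f ig-crd.yaml
	# A document missing the two required top-level fields fails with the exact
	# error seen in the log: "[apiVersion not set, kind not set]".
	# Every document in the file needs a header along the lines of:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
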
	I1101 09:37:32.312867  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.562944377s)
	I1101 09:37:32.312904  469001 addons.go:480] Verifying addon registry=true in "addons-171954"
	I1101 09:37:32.312952  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.537963695s)
	I1101 09:37:32.312989  469001 addons.go:480] Verifying addon metrics-server=true in "addons-171954"
	I1101 09:37:32.313050  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.089032021s)
	I1101 09:37:32.313167  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.03445961s)
	I1101 09:37:32.313212  469001 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.09371243s)
	I1101 09:37:32.313235  469001 api_server.go:72] duration metric: took 15.204084789s to wait for apiserver process to appear ...
	I1101 09:37:32.313246  469001 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:37:32.313266  469001 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	W1101 09:37:32.313206  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:37:32.313366  469001 retry.go:31] will retry after 281.35298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
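(Note on the failure above: "ensure CRDs are installed first" points at an ordering problem rather than a bad manifest. The VolumeSnapshotClass is submitted in the same apply as the snapshot CRDs, so the API server has no resource mapping for the kind yet; minikube's retry loop eventually succeeds once the CRDs are established. A hedged sketch of making that ordering explicit, using the file names from the command above; the wait target and timeout are assumptions, not part of the original run:)

	# Apply the snapshot CRDs on their own first.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshots.yaml
	# Wait until the CRD is established before creating instances of it.
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Now the VolumeSnapshotClass applies without the "no matches for kind" error.
	kubectl apply -f csi-hostpath-snapshotclass.yaml
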
	I1101 09:37:32.313478  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.840255138s)
	I1101 09:37:32.313508  469001 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-171954"
	I1101 09:37:32.313516  469001 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.954865247s)
	I1101 09:37:32.314672  469001 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-171954 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:37:32.314676  469001 out.go:179] * Verifying registry addon...
	I1101 09:37:32.315380  469001 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:37:32.315414  469001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:37:32.316977  469001 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:37:32.317765  469001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:37:32.317765  469001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:37:32.318307  469001 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:37:32.318325  469001 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:37:32.345252  469001 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I1101 09:37:32.360150  469001 api_server.go:141] control plane version: v1.34.1
	I1101 09:37:32.360197  469001 api_server.go:131] duration metric: took 46.942215ms to wait for apiserver health ...
	I1101 09:37:32.360210  469001 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:37:32.360151  469001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:37:32.360299  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:32.360933  469001 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:37:32.360954  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:32.410304  469001 system_pods.go:59] 19 kube-system pods found
	I1101 09:37:32.410351  469001 system_pods.go:61] "amd-gpu-device-plugin-6d6zp" [f0649712-baaf-4687-8794-e4f7fa1abbf3] Running
	I1101 09:37:32.410359  469001 system_pods.go:61] "coredns-66bc5c9577-82rzx" [7d7cc4f6-2672-4094-879a-46e67b28e435] Running
	I1101 09:37:32.410370  469001 system_pods.go:61] "csi-hostpath-attacher-0" [d7af45d7-28a8-4d42-965c-771bedd789c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:37:32.410379  469001 system_pods.go:61] "csi-hostpath-resizer-0" [079d5731-0a56-4df7-a492-02ec65ed4d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:37:32.410390  469001 system_pods.go:61] "csi-hostpathplugin-lz42x" [d16a2925-b54d-4cb2-86b7-2cadf629f685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:37:32.410398  469001 system_pods.go:61] "etcd-addons-171954" [59525e9e-020a-44e6-9cc1-fe92d51aef32] Running
	I1101 09:37:32.410403  469001 system_pods.go:61] "kube-apiserver-addons-171954" [83f76de6-e6a2-4511-a04b-5b66a45c4520] Running
	I1101 09:37:32.410407  469001 system_pods.go:61] "kube-controller-manager-addons-171954" [2ae5c0d5-2cab-4c36-aeb8-435379aa7f6c] Running
	I1101 09:37:32.410415  469001 system_pods.go:61] "kube-ingress-dns-minikube" [e4975d50-83f3-4d4e-b9a0-064a5cefa3cb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:37:32.410420  469001 system_pods.go:61] "kube-proxy-mn5vv" [1513deb8-f836-447f-9bdd-5d16c33fdf8b] Running
	I1101 09:37:32.410426  469001 system_pods.go:61] "kube-scheduler-addons-171954" [c17c94bf-7ee0-4da7-9dd9-9ea946253c7b] Running
	I1101 09:37:32.410436  469001 system_pods.go:61] "metrics-server-85b7d694d7-mtpch" [199cb650-4ad7-4124-a7b0-d5f45cafb213] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:37:32.410454  469001 system_pods.go:61] "nvidia-device-plugin-daemonset-h99d4" [cb2516a7-94c8-4a1d-ac21-4d4c99fa0089] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:37:32.410467  469001 system_pods.go:61] "registry-6b586f9694-9bkw7" [243856d4-1236-40bb-861f-009deae9b590] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:37:32.410475  469001 system_pods.go:61] "registry-creds-764b6fb674-cp7kw" [08db67ff-5518-47e9-b174-3d08d6b9cf74] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:37:32.410492  469001 system_pods.go:61] "registry-proxy-mq64f" [2bb47043-faae-42c9-bc07-488e91a1a3c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:37:32.410503  469001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-844vh" [4d57dbd1-1833-4218-91c5-3745f27e0278] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:37:32.410514  469001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jw7mp" [1c5dba75-d888-47d8-984c-5b3613134aa9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:37:32.410520  469001 system_pods.go:61] "storage-provisioner" [fdd13e9f-4cbc-4034-b546-c8cb8b8a550e] Running
	I1101 09:37:32.410529  469001 system_pods.go:74] duration metric: took 50.304556ms to wait for pod list to return data ...
	I1101 09:37:32.410543  469001 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:37:32.447390  469001 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:37:32.447428  469001 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:37:32.502400  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:32.513402  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:32.513748  469001 default_sa.go:45] found service account: "default"
	I1101 09:37:32.513774  469001 default_sa.go:55] duration metric: took 103.223203ms for default service account to be created ...
	I1101 09:37:32.513784  469001 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:37:32.595762  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:37:32.602903  469001 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:37:32.602936  469001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:37:32.705771  469001 system_pods.go:86] 19 kube-system pods found
	I1101 09:37:32.705813  469001 system_pods.go:89] "amd-gpu-device-plugin-6d6zp" [f0649712-baaf-4687-8794-e4f7fa1abbf3] Running
	I1101 09:37:32.705824  469001 system_pods.go:89] "coredns-66bc5c9577-82rzx" [7d7cc4f6-2672-4094-879a-46e67b28e435] Running
	I1101 09:37:32.705835  469001 system_pods.go:89] "csi-hostpath-attacher-0" [d7af45d7-28a8-4d42-965c-771bedd789c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:37:32.705843  469001 system_pods.go:89] "csi-hostpath-resizer-0" [079d5731-0a56-4df7-a492-02ec65ed4d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:37:32.705855  469001 system_pods.go:89] "csi-hostpathplugin-lz42x" [d16a2925-b54d-4cb2-86b7-2cadf629f685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:37:32.705870  469001 system_pods.go:89] "etcd-addons-171954" [59525e9e-020a-44e6-9cc1-fe92d51aef32] Running
	I1101 09:37:32.705875  469001 system_pods.go:89] "kube-apiserver-addons-171954" [83f76de6-e6a2-4511-a04b-5b66a45c4520] Running
	I1101 09:37:32.705882  469001 system_pods.go:89] "kube-controller-manager-addons-171954" [2ae5c0d5-2cab-4c36-aeb8-435379aa7f6c] Running
	I1101 09:37:32.705892  469001 system_pods.go:89] "kube-ingress-dns-minikube" [e4975d50-83f3-4d4e-b9a0-064a5cefa3cb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:37:32.705895  469001 system_pods.go:89] "kube-proxy-mn5vv" [1513deb8-f836-447f-9bdd-5d16c33fdf8b] Running
	I1101 09:37:32.705904  469001 system_pods.go:89] "kube-scheduler-addons-171954" [c17c94bf-7ee0-4da7-9dd9-9ea946253c7b] Running
	I1101 09:37:32.705910  469001 system_pods.go:89] "metrics-server-85b7d694d7-mtpch" [199cb650-4ad7-4124-a7b0-d5f45cafb213] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:37:32.705916  469001 system_pods.go:89] "nvidia-device-plugin-daemonset-h99d4" [cb2516a7-94c8-4a1d-ac21-4d4c99fa0089] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:37:32.705927  469001 system_pods.go:89] "registry-6b586f9694-9bkw7" [243856d4-1236-40bb-861f-009deae9b590] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:37:32.705936  469001 system_pods.go:89] "registry-creds-764b6fb674-cp7kw" [08db67ff-5518-47e9-b174-3d08d6b9cf74] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:37:32.705947  469001 system_pods.go:89] "registry-proxy-mq64f" [2bb47043-faae-42c9-bc07-488e91a1a3c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:37:32.705957  469001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-844vh" [4d57dbd1-1833-4218-91c5-3745f27e0278] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:37:32.705964  469001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jw7mp" [1c5dba75-d888-47d8-984c-5b3613134aa9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:37:32.705968  469001 system_pods.go:89] "storage-provisioner" [fdd13e9f-4cbc-4034-b546-c8cb8b8a550e] Running
	I1101 09:37:32.705976  469001 system_pods.go:126] duration metric: took 192.18624ms to wait for k8s-apps to be running ...
	I1101 09:37:32.705986  469001 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:37:32.706039  469001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:37:32.895946  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:37:32.962239  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:32.962952  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:32.967962  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:33.327137  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:33.329587  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:33.435579  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:33.822210  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:33.827374  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:33.934685  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:34.332514  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:34.332564  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:34.452924  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:34.841928  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:34.841926  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:34.935046  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:35.324608  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:35.325902  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:35.439666  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:35.644508  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.131068106s)
	W1101 09:37:35.644571  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:35.644606  469001 retry.go:31] will retry after 261.543474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:35.644644  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.048838791s)
	I1101 09:37:35.644709  469001 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.938644536s)
	I1101 09:37:35.644739  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.748767341s)
	I1101 09:37:35.644744  469001 system_svc.go:56] duration metric: took 2.938751866s WaitForService to wait for kubelet
	I1101 09:37:35.644760  469001 kubeadm.go:587] duration metric: took 18.535607727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:37:35.644793  469001 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:37:35.646227  469001 addons.go:480] Verifying addon gcp-auth=true in "addons-171954"
	I1101 09:37:35.647983  469001 out.go:179] * Verifying gcp-auth addon...
	I1101 09:37:35.650109  469001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:37:35.652490  469001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:37:35.652513  469001 node_conditions.go:123] node cpu capacity is 2
	I1101 09:37:35.652530  469001 node_conditions.go:105] duration metric: took 7.730797ms to run NodePressure ...
	I1101 09:37:35.652541  469001 start.go:242] waiting for startup goroutines ...
	I1101 09:37:35.654349  469001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:37:35.654364  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:35.827898  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:35.828078  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:35.907201  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:35.934961  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:36.155852  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:36.321105  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:36.324595  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:36.437794  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:36.654613  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:36.824201  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:36.825127  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:37:36.847760  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:36.847796  469001 retry.go:31] will retry after 710.923493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:36.934783  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:37.155472  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:37.322543  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:37.322847  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:37.434763  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:37.559828  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:37.655652  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:37.823274  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:37.823302  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:37.933255  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:38.153443  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:37:38.278110  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:38.278159  469001 retry.go:31] will retry after 528.472309ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:38.322503  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:38.322604  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:38.434366  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:38.654399  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:38.807693  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:38.824472  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:38.824526  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:38.935770  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:39.154096  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:39.321243  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:39.321276  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:39.436064  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:37:39.622361  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:39.622406  469001 retry.go:31] will retry after 1.033458253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:39.653538  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:39.824005  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:39.824243  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:39.933742  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:40.154116  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:40.322741  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:40.323176  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:40.434104  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:40.656656  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:40.657499  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:40.823650  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:40.824545  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:40.936924  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:41.158121  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:41.322524  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:41.323655  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:37:41.429244  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:41.429278  469001 retry.go:31] will retry after 2.648062072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:41.433302  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:41.653526  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:41.822055  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:41.822388  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:41.933797  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:42.154362  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:42.322677  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:42.322925  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:42.434924  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:42.654607  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:42.823040  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:42.823381  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:42.935633  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:43.154666  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:43.323734  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:43.323845  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:43.434611  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:43.654024  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:43.822483  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:43.822600  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:43.935017  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:44.078024  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:44.156834  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:44.327871  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:44.328195  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:44.434758  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:44.654942  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:44.820641  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:44.822384  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:44.934294  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:37:44.998789  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:44.998832  469001 retry.go:31] will retry after 3.435447798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:45.156197  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:45.322083  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:45.322148  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:45.434889  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:45.726739  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:45.847364  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:45.848938  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:45.934330  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:46.155610  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:46.322640  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:46.323272  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:46.433969  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:46.654624  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:46.822985  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:46.823853  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:46.934562  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:47.155856  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:47.324017  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:47.325782  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:47.433941  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:47.654559  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:47.826812  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:47.826958  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:47.935930  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:48.154678  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:48.324157  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:48.329836  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:48.435153  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:48.472736  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:48.655446  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:48.824600  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:48.826377  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:48.934386  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:49.154739  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:37:49.284099  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:49.284135  469001 retry.go:31] will retry after 4.224315269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
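	The retry loop above is caused by kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml: the manifest does not declare the mandatory apiVersion and kind fields, so server-side validation fails on every attempt even though the other gadget resources apply cleanly. The actual contents of ig-crd.yaml are not captured in this log, so the snippet below is only an illustrative sketch (all names hypothetical) of the header a CustomResourceDefinition manifest would need in order to pass validation:

	# Hypothetical minimal CRD manifest; ig-crd.yaml itself is not shown in this log.
	apiVersion: apiextensions.k8s.io/v1        # required top-level field flagged as missing
	kind: CustomResourceDefinition             # required top-level field flagged as missing
	metadata:
	  name: traces.gadget.kinvolk.io           # hypothetical name: <plural>.<group>
	spec:
	  group: gadget.kinvolk.io                 # hypothetical API group
	  names:
	    kind: Trace
	    plural: traces
	    singular: trace
	  scope: Namespaced
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object
	        x-kubernetes-preserve-unknown-fields: true

	As the stderr itself notes, validation could also be bypassed with --validate=false, but that would only mask the malformed manifest rather than fix it.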
	I1101 09:37:49.322013  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:49.322913  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:49.435161  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:49.654972  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:49.823187  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:49.823625  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:49.936742  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:50.154223  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:50.322466  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:50.322997  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:50.436881  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:50.655959  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:50.822195  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:50.822984  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:50.933953  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:51.156582  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:51.324060  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:51.324450  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:51.435708  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:51.655319  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:51.825471  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:51.825685  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:51.934838  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:52.154139  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:52.322583  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:37:52.323599  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:52.434483  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:52.654146  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:52.823735  469001 kapi.go:107] duration metric: took 20.505968488s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:37:52.823735  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:52.934269  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:53.156188  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:53.324182  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:53.439282  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:53.509370  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:53.653487  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:53.824703  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:53.936560  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:54.153439  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:54.323618  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:54.434184  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:54.580169  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.070723402s)
	W1101 09:37:54.580243  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:54.580285  469001 retry.go:31] will retry after 3.625792733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:54.654708  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:54.823465  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:54.936435  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:55.156104  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:55.322602  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:55.435116  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:55.655078  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:55.821643  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:55.936206  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:56.155345  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:56.324303  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:56.434780  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:56.656746  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:56.823410  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:56.933668  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:57.155476  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:57.323326  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:57.434020  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:57.654949  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:57.821986  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:57.935108  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:58.154447  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:58.206641  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:37:58.323437  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:58.434772  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:58.657649  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:58.823983  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:58.934997  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:59.156122  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:59.273779  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.067089377s)
	W1101 09:37:59.273848  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:59.273871  469001 retry.go:31] will retry after 10.853888481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:37:59.323896  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:59.434866  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:37:59.655868  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:37:59.822866  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:37:59.936388  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:00.160425  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:00.347336  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:00.506767  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:00.655331  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:00.825676  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:00.936875  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:01.154159  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:01.323627  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:01.436321  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:01.655092  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:01.823571  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:01.938252  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:02.154019  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:02.324357  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:02.436229  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:02.657831  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:02.822441  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:02.935962  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:03.155951  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:03.361497  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:03.591165  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:03.818138  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:03.821127  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:03.936190  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:04.153897  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:04.321702  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:04.435648  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:04.654264  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:04.822223  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:04.933896  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:05.159784  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:05.328378  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:05.437072  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:05.653435  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:05.822974  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:05.935602  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:06.159597  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:06.325120  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:06.435037  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:06.656141  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:06.822414  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:06.934993  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:07.157716  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:07.323127  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:07.437067  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:07.653568  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:07.932779  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:07.934896  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:08.154794  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:08.323824  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:08.435013  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:08.653745  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:08.821203  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:08.933835  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:09.155498  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:09.327864  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:09.435206  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:09.654833  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:09.821682  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:09.937270  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:10.128417  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:38:10.155332  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:10.322407  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:10.438179  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:10.654417  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:10.824260  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:10.933889  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:38:10.947949  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:10.947985  469001 retry.go:31] will retry after 11.955416473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:11.156700  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:11.322605  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:11.435613  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:11.655288  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:11.822028  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:11.933722  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:12.153897  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:12.321714  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:12.439240  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:12.654065  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:12.822158  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:12.935044  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:13.157269  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:13.324006  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:13.435530  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:13.655691  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:13.824310  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:13.935405  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:14.155013  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:14.321899  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:14.434329  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:14.656214  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:14.822558  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:14.934985  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:15.154661  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:15.329329  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:15.433508  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:15.673838  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:15.824902  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:15.934512  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:16.155076  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:16.323022  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:16.435174  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:16.653875  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:16.821675  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:16.935561  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:17.155188  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:17.323112  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:17.434620  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:17.654634  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:17.824546  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:17.938658  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:18.156597  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:18.323071  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:18.433625  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:18.653640  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:18.821998  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:18.935418  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:19.159465  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:19.325024  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:19.436100  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:19.655541  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:19.826330  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:19.936742  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:20.154477  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:20.325338  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:20.434993  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:20.715734  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:20.823799  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:20.933643  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:21.154599  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:21.324723  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:21.433847  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:21.658137  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:22.030001  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:22.030420  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:22.154864  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:22.322953  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:22.440580  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:22.656728  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:22.827394  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:22.904561  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:38:22.937086  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:23.157776  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:23.325118  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:23.438012  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:23.654173  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:23.824994  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:23.936907  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:24.156780  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:24.263305  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.358688323s)
	W1101 09:38:24.263366  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:24.263400  469001 retry.go:31] will retry after 16.096886723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:24.326077  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:24.448233  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:24.703727  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:24.822463  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:24.934963  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:25.153989  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:25.321843  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:25.434728  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:25.653784  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:25.822906  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:25.934843  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:26.156451  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:26.323652  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:26.437757  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:26.655025  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:26.823540  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:26.935940  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:27.154883  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:27.321943  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:27.434664  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:27.655217  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:27.822753  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:27.934132  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:28.155676  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:28.335885  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:28.538282  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:28.667126  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:28.821849  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:28.938094  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:29.164964  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:29.325409  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:29.436230  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:29.654457  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:29.823456  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:29.934625  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:30.159086  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:30.327098  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:30.433735  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:30.654352  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:30.825719  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:30.934956  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:31.156112  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:31.322364  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:31.433563  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:31.655397  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:31.827849  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:31.936627  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:32.157062  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:32.326024  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:32.437662  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:32.655029  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:32.828035  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:32.936262  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:33.155243  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:33.323210  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:33.434451  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:33.655287  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:33.823200  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:33.959270  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:34.164133  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:34.323506  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:34.432900  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:34.654285  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:34.822767  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:34.934257  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:35.153248  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:35.323548  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:35.437013  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:35.655416  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:35.822464  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:35.934650  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:36.157744  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:36.323318  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:36.434503  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:36.654130  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:36.823147  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:36.933545  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:37.162469  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:37.396762  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:37.438931  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:37.659993  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:37.823285  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:37.933168  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:38.159516  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:38.323854  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:38.436025  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:38.654388  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:38.944318  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:38.944690  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:39.170102  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:39.325426  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:39.434382  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:39.656138  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:39.824113  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:39.933788  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:40.155963  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:40.323270  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:40.361439  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:38:40.435172  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:40.656786  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:40.822664  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:40.936133  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:41.158135  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:41.328738  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:41.434619  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:41.542439  469001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.180949279s)
	W1101 09:38:41.542503  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:41.542527  469001 retry.go:31] will retry after 19.708052248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:38:41.655219  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:41.824603  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:41.934784  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:42.154922  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:42.323783  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:42.528646  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:42.656652  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:42.826618  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:42.946679  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:43.157433  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:43.325456  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:43.441660  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:43.658874  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:43.825130  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:43.935792  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:44.155290  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:44.327317  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:44.434067  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:44.656700  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:44.832635  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:44.938408  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:45.360325  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:45.360409  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:45.499518  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:45.658791  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:45.839037  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:45.937555  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:46.155279  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:46.323939  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:46.434883  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:46.657933  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:46.824599  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:46.934311  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:47.154922  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:47.321988  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:47.434527  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:47.660757  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:47.822413  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:47.934540  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:48.153670  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:48.323275  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:48.435622  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:48.654886  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:48.822155  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:48.933852  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:49.155248  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:49.324529  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:49.434576  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:49.654880  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:49.822916  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:49.937684  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:50.155327  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:50.324818  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:50.434248  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:50.654796  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:50.876707  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:50.978859  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:51.157125  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:51.324989  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:51.436892  469001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:38:51.654881  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:51.823168  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:51.941715  469001 kapi.go:107] duration metric: took 1m23.511737587s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:38:52.155551  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:52.334252  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:52.656285  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:52.824405  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:53.159008  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:53.324021  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:53.657322  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:53.823707  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:54.154482  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:54.323679  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:54.655142  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:54.824368  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:55.157235  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:55.359694  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:55.658947  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:55.826774  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:56.153882  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:56.324055  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:56.654855  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:56.823480  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:57.154734  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:57.324144  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:57.655469  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:57.824453  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:58.156221  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:58.322063  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:58.654850  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:58.849515  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:38:59.158019  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:38:59.321885  469001 kapi.go:107] duration metric: took 1m27.004116248s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:38:59.654081  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:00.154203  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:00.654530  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:01.154386  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:01.251614  469001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:39:01.653538  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:39:02.130927  469001 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:39:02.131106  469001 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
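	Editor's note on the inspektor-gadget failure above: kubectl's "apiVersion not set, kind not set" validation error means at least one document in the applied /etc/kubernetes/addons/ig-crd.yaml is missing its top-level apiVersion and kind fields, so each retry of the same apply fails identically; the --validate=false workaround the error suggests would only skip the check, not repair the manifest. As a hedged illustration only (the resource names below are hypothetical and not taken from this run), a CRD document that would pass that validation looks like:

	    apiVersion: apiextensions.k8s.io/v1        # top-level field the validator reported as missing
	    kind: CustomResourceDefinition             # top-level field the validator reported as missing
	    metadata:
	      name: traces.gadget.kinvolk.io           # hypothetical name, for illustration only
	    spec:
	      group: gadget.kinvolk.io
	      scope: Namespaced
	      names:
	        plural: traces
	        singular: trace
	        kind: Trace
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	              x-kubernetes-preserve-unknown-fields: true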
	I1101 09:39:02.154552  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:02.654417  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:03.154926  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:03.653480  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:04.155289  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:04.655265  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:05.154194  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:05.655422  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:06.154937  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:06.653741  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:07.154172  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:07.654871  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:08.153587  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:08.654683  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:09.154541  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:09.654680  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:10.154457  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:10.655464  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:11.154261  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:11.654484  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:12.154946  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:12.654404  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:13.154291  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:13.654050  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:14.154418  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:14.654294  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:15.154229  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:15.654772  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:16.153849  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:16.654239  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:17.154658  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:17.655471  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:18.154328  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:18.654624  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:19.154282  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:19.654270  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:20.154566  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:20.655189  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:21.154382  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:21.654380  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:22.154921  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:22.654669  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:23.154429  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:23.654492  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:24.155236  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:24.654463  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:25.154556  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:25.655654  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:26.154197  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:26.654488  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:27.154533  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:27.654447  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:28.154489  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:28.655186  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:29.155122  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:29.654283  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:30.154227  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:30.654414  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:31.154482  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:31.653859  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:32.153596  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:32.655631  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:33.154788  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:33.653517  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:34.154687  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:34.654890  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:35.153746  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:35.656091  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:36.154692  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:36.655528  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:37.154874  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:37.654215  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:38.154439  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:38.654607  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:39.155180  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:39.654148  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:40.154016  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:40.654669  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:41.154914  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:41.653675  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:42.154752  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:42.653782  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:43.153732  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:43.653866  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:44.153661  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:44.654090  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:45.154232  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:45.655532  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:46.154316  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:46.654831  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:47.154017  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:47.654229  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:48.153856  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:48.654273  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:49.154203  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:49.656205  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:50.154965  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:50.654486  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:51.155347  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:51.654093  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:52.154200  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:52.655188  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:53.155767  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:53.655101  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:54.154398  469001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:39:54.654850  469001 kapi.go:107] duration metric: took 2m19.004737641s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:39:54.656684  469001 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-171954 cluster.
	I1101 09:39:54.658076  469001 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:39:54.659361  469001 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 09:39:54.660686  469001 out.go:179] * Enabled addons: default-storageclass, cloud-spanner, amd-gpu-device-plugin, registry-creds, storage-provisioner, ingress-dns, nvidia-device-plugin, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1101 09:39:54.661851  469001 addons.go:515] duration metric: took 2m37.552663662s for enable addons: enabled=[default-storageclass cloud-spanner amd-gpu-device-plugin registry-creds storage-provisioner ingress-dns nvidia-device-plugin volcano metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1101 09:39:54.661910  469001 start.go:247] waiting for cluster config update ...
	I1101 09:39:54.661943  469001 start.go:256] writing updated cluster config ...
	I1101 09:39:54.662266  469001 ssh_runner.go:195] Run: rm -f paused
	I1101 09:39:54.669287  469001 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:39:54.673898  469001 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-82rzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.682533  469001 pod_ready.go:94] pod "coredns-66bc5c9577-82rzx" is "Ready"
	I1101 09:39:54.682555  469001 pod_ready.go:86] duration metric: took 8.632355ms for pod "coredns-66bc5c9577-82rzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.685166  469001 pod_ready.go:83] waiting for pod "etcd-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.690247  469001 pod_ready.go:94] pod "etcd-addons-171954" is "Ready"
	I1101 09:39:54.690269  469001 pod_ready.go:86] duration metric: took 5.080035ms for pod "etcd-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.692975  469001 pod_ready.go:83] waiting for pod "kube-apiserver-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.699709  469001 pod_ready.go:94] pod "kube-apiserver-addons-171954" is "Ready"
	I1101 09:39:54.699734  469001 pod_ready.go:86] duration metric: took 6.740881ms for pod "kube-apiserver-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:54.702351  469001 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:55.074313  469001 pod_ready.go:94] pod "kube-controller-manager-addons-171954" is "Ready"
	I1101 09:39:55.074350  469001 pod_ready.go:86] duration metric: took 371.971687ms for pod "kube-controller-manager-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:55.274528  469001 pod_ready.go:83] waiting for pod "kube-proxy-mn5vv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:55.674635  469001 pod_ready.go:94] pod "kube-proxy-mn5vv" is "Ready"
	I1101 09:39:55.674666  469001 pod_ready.go:86] duration metric: took 400.11011ms for pod "kube-proxy-mn5vv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:55.875985  469001 pod_ready.go:83] waiting for pod "kube-scheduler-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:56.274679  469001 pod_ready.go:94] pod "kube-scheduler-addons-171954" is "Ready"
	I1101 09:39:56.274711  469001 pod_ready.go:86] duration metric: took 398.69081ms for pod "kube-scheduler-addons-171954" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:39:56.274723  469001 pod_ready.go:40] duration metric: took 1.605395963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:39:56.321908  469001 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:39:56.323554  469001 out.go:179] * Done! kubectl is now configured to use "addons-171954" cluster and "default" namespace by default
	
	
	==> Docker <==
	Nov 01 09:41:44 addons-171954 cri-dockerd[1420]: time="2025-11-01T09:41:44Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Nov 01 09:41:51 addons-171954 dockerd[1552]: time="2025-11-01T09:41:51.618798949Z" level=info msg="ignoring event" container=e2bc94d58e3ee057aff7e56cf728f00dee0bdf44b656c5804ee5ac77d4df261f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:41:51 addons-171954 dockerd[1552]: time="2025-11-01T09:41:51.749259665Z" level=info msg="ignoring event" container=2129b1cc0144b9b9a12ca92a88a8869a292808451b375e66d1f3223bf80587af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:04 addons-171954 cri-dockerd[1420]: time="2025-11-01T09:42:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd2ae67e2687be4a6f7a1a85dacc83bd22d6b9b5f1302945b7cf55d457d2d550/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 01 09:42:04 addons-171954 cri-dockerd[1420]: time="2025-11-01T09:42:04Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Nov 01 09:42:05 addons-171954 dockerd[1552]: time="2025-11-01T09:42:05.914810357Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:42:10 addons-171954 dockerd[1552]: time="2025-11-01T09:42:10.705505342Z" level=info msg="ignoring event" container=9fcbdea89bceeb7e4999f610e586f321e02637576419add0a78ab799f47d5825 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:10 addons-171954 dockerd[1552]: time="2025-11-01T09:42:10.914202280Z" level=info msg="ignoring event" container=cd2ae67e2687be4a6f7a1a85dacc83bd22d6b9b5f1302945b7cf55d457d2d550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:12 addons-171954 dockerd[1552]: time="2025-11-01T09:42:12.462605892Z" level=info msg="ignoring event" container=d0c38c03be5f6817d32776317f30b7bec0cbebdf9df1a22944646789841daae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:12 addons-171954 dockerd[1552]: time="2025-11-01T09:42:12.482964366Z" level=info msg="ignoring event" container=c5ca759bef2e7094d33e6343b13ebd13b4b81541b68440d18a62c86cfd25258e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:12 addons-171954 dockerd[1552]: time="2025-11-01T09:42:12.681080420Z" level=info msg="ignoring event" container=0c2ed2a8592ac9bc7f00ae50f46e57d19543c3708114c98a01a3707051c80acf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:12 addons-171954 dockerd[1552]: time="2025-11-01T09:42:12.790926013Z" level=info msg="ignoring event" container=4b0797588a14f839e8432127e99c94c08523a555fdce5815978f2d4154db3235 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.756529990Z" level=info msg="ignoring event" container=6c6cd71f6ddb840a5121a8eb1b6ecd9cef328319f57376f1545f1ed84ee4ac3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.784517476Z" level=info msg="ignoring event" container=08d665cd572eac8bfb430b885f0d944b1878dc89278f3c940f588aef40a123eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.798741668Z" level=info msg="ignoring event" container=345c15e322a9acc76ae958923680a9fe90a35fc1492746174495663e7a658533 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.807858347Z" level=info msg="ignoring event" container=603f3f9917926850d6e93175191df2400b3644ac4bf659c28010de06598ad76b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.820660628Z" level=info msg="ignoring event" container=6517b1f250e36e81f5f83d3a6068edf056a01fdfb47f77430c6d0fc623bca4b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.831735996Z" level=info msg="ignoring event" container=0439241552f7ea0481feba37f2e0797cd345d92093d386d2a9b51f8ef1d3d6ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.846305162Z" level=info msg="ignoring event" container=e0a2e4c8ba7695ae0afe1982f02b4106deb3adc4eaa2d8ef63df5ee3f514ce79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:13 addons-171954 dockerd[1552]: time="2025-11-01T09:42:13.854206436Z" level=info msg="ignoring event" container=9414a4d96786485b0858d85c7d3b6350ace9084a43716a2c0ddea36ca2a81136 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:14 addons-171954 dockerd[1552]: time="2025-11-01T09:42:14.202272481Z" level=info msg="ignoring event" container=151539ea757b7838a9cb0715584b3d1374dc5933216ff2cae3ce758bab66a556 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:14 addons-171954 dockerd[1552]: time="2025-11-01T09:42:14.241724387Z" level=info msg="ignoring event" container=504a83e7dc8f92d57cf5613dfe5723ba947e1f9a67f3fecd27115745dbafcdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:14 addons-171954 dockerd[1552]: time="2025-11-01T09:42:14.251727704Z" level=info msg="ignoring event" container=bd4d1568d283fbee528a127fb5bbc8bf09dd2ffb73d2d63f00ebd90e0a223357 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 09:42:58 addons-171954 dockerd[1552]: time="2025-11-01T09:42:58.887478050Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:42:58 addons-171954 cri-dockerd[1420]: time="2025-11-01T09:42:58Z" level=info msg="Stop pulling image busybox:stable: stable: Pulling from library/busybox"
	
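	Editor's note on the Docker log above: the two "toomanyrequests" lines show Docker Hub's unauthenticated pull rate limit being hit, including for busybox:stable, which is consistent with the test-local-path pod remaining Pending because its busybox container image was never pulled. A minimal sketch (pod and secret names are hypothetical, not part of this run) of authenticating pulls so the limit applies to an account rather than the shared anonymous quota:

	    # Secret created beforehand, e.g.:
	    #   kubectl create secret docker-registry dockerhub-creds \
	    #     --docker-username=<user> --docker-password=<access-token>
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: busybox-authenticated-pull         # hypothetical pod, for illustration only
	    spec:
	      imagePullSecrets:
	        - name: dockerhub-creds                # hypothetical secret holding Docker Hub credentials
	      containers:
	        - name: busybox
	          image: busybox:stable
	          command: ["sh", "-c", "echo ok && sleep 3600"]
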
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f18e8bfe82e7       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                         3 minutes ago       Running             hello-world-app           0                   3a4154e742429       hello-world-app-5d498dc89-dbc6r
	d12e3154db57f       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                                       3 minutes ago       Running             nginx                     0                   b9442583bf60b       nginx
	d23d1fd068f37       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                 3 minutes ago       Running             busybox                   0                   ac8456df38f19       busybox
	841a7d7e4af9d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd   6 minutes ago       Running             gadget                    0                   befd011cee2b2       gadget-8jl2d
	7ba01f690c44b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246              6 minutes ago       Running             local-path-provisioner    0                   b0668b4e08444       local-path-provisioner-648f6765c9-kmb74
	a48485dc44618       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                      6 minutes ago       Running             amd-gpu-device-plugin     0                   d7b465b9eae00       amd-gpu-device-plugin-6d6zp
	e84a69273dd94       6e38f40d628db                                                                                                       6 minutes ago       Running             storage-provisioner       0                   4fa4445438b57       storage-provisioner
	cad651fc183aa       52546a367cc9e                                                                                                       7 minutes ago       Running             coredns                   0                   15d621afa2bf0       coredns-66bc5c9577-82rzx
	464b6bc30e411       fc25172553d79                                                                                                       7 minutes ago       Running             kube-proxy                0                   e5a323e577059       kube-proxy-mn5vv
	31e80d2ae02fe       7dd6aaa1717ab                                                                                                       7 minutes ago       Running             kube-scheduler            0                   2961895b278d2       kube-scheduler-addons-171954
	c222c25a7e6dd       5f1f5298c888d                                                                                                       7 minutes ago       Running             etcd                      0                   dd043faab84a3       etcd-addons-171954
	715cf963e4780       c3994bc696102                                                                                                       7 minutes ago       Running             kube-apiserver            0                   4499c5a936477       kube-apiserver-addons-171954
	788466331c537       c80c8dbafe7dd                                                                                                       7 minutes ago       Running             kube-controller-manager   0                   c1516fb2444ce       kube-controller-manager-addons-171954
	
	
	==> coredns [cad651fc183a] <==
	[INFO] 10.244.0.26:60944 - 54224 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000666121s
	[INFO] 10.244.0.26:60944 - 4162 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000218106s
	[INFO] 10.244.0.26:48013 - 64116 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000420672s
	[INFO] 10.244.0.26:60944 - 13573 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073143s
	[INFO] 10.244.0.26:48013 - 52044 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000576285s
	[INFO] 10.244.0.26:60944 - 41876 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104763s
	[INFO] 10.244.0.26:48013 - 12267 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000549943s
	[INFO] 10.244.0.26:48013 - 34407 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000158571s
	[INFO] 10.244.0.26:60944 - 32926 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000804269s
	[INFO] 10.244.0.26:51938 - 38054 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000138755s
	[INFO] 10.244.0.26:51938 - 17976 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00043302s
	[INFO] 10.244.0.26:51938 - 6845 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000333467s
	[INFO] 10.244.0.26:51938 - 49624 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000244056s
	[INFO] 10.244.0.26:51938 - 44560 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000242258s
	[INFO] 10.244.0.26:51938 - 59539 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00039903s
	[INFO] 10.244.0.26:51938 - 10787 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000641565s
	[INFO] 10.244.0.26:34276 - 25514 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000129565s
	[INFO] 10.244.0.26:34276 - 13458 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000210133s
	[INFO] 10.244.0.26:34276 - 28101 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000271175s
	[INFO] 10.244.0.26:34276 - 27084 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000147535s
	[INFO] 10.244.0.26:34276 - 16758 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000323516s
	[INFO] 10.244.0.26:34276 - 32362 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000183508s
	[INFO] 10.244.0.26:34276 - 13996 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103223s
	[INFO] 10.244.0.32:39696 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000369289s
	[INFO] 10.244.0.32:34022 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148138s
	
	
	==> describe nodes <==
	Name:               addons-171954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-171954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-171954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_37_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-171954
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:37:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-171954
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:41:48 +0000   Sat, 01 Nov 2025 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:41:48 +0000   Sat, 01 Nov 2025 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:41:48 +0000   Sat, 01 Nov 2025 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:41:48 +0000   Sat, 01 Nov 2025 09:37:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    addons-171954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 27488242efd54e9197bb46360de477f6
	  System UUID:                27488242-efd5-4e91-97bb-46360de477f6
	  Boot ID:                    02bb73e8-d738-49a4-822b-b0db91963f1d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  default                     hello-world-app-5d498dc89-dbc6r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  gadget                      gadget-8jl2d                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 amd-gpu-device-plugin-6d6zp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 coredns-66bc5c9577-82rzx                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m8s
	  kube-system                 etcd-addons-171954                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m13s
	  kube-system                 kube-apiserver-addons-171954               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-addons-171954      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-mn5vv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-addons-171954               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  local-path-storage          local-path-provisioner-648f6765c9-kmb74    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  Starting                 7m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m20s (x8 over 7m20s)  kubelet          Node addons-171954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s (x8 over 7m20s)  kubelet          Node addons-171954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s (x7 over 7m20s)  kubelet          Node addons-171954 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s                  kubelet          Node addons-171954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s                  kubelet          Node addons-171954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s                  kubelet          Node addons-171954 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m12s                  kubelet          Node addons-171954 status is now: NodeReady
	  Normal  RegisteredNode           7m9s                   node-controller  Node addons-171954 event: Registered Node addons-171954 in Controller
	
	
	==> dmesg <==
	[  +5.839952] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.236178] kauditd_printk_skb: 116 callbacks suppressed
	[  +1.239180] kauditd_printk_skb: 195 callbacks suppressed
	[  +6.538731] kauditd_printk_skb: 49 callbacks suppressed
	[  +3.119921] kauditd_printk_skb: 109 callbacks suppressed
	[  +2.017235] kauditd_printk_skb: 28 callbacks suppressed
	[  +4.653085] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 09:39] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 09:40] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.560671] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.109320] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.301630] kauditd_printk_skb: 66 callbacks suppressed
	[  +2.107272] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.322490] kauditd_printk_skb: 5 callbacks suppressed
	[  +4.926946] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 09:41] kauditd_printk_skb: 94 callbacks suppressed
	[  +4.309383] kauditd_printk_skb: 36 callbacks suppressed
	[  +3.884924] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.807002] kauditd_printk_skb: 195 callbacks suppressed
	[  +3.218339] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.266550] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.763176] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.957723] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 09:42] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.519871] kauditd_printk_skb: 121 callbacks suppressed
	
	
	==> etcd [c222c25a7e6d] <==
	{"level":"warn","ts":"2025-11-01T09:38:22.020362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.485873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:22.020490Z","caller":"traceutil/trace.go:172","msg":"trace[1923204454] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:1161; }","duration":"258.617314ms","start":"2025-11-01T09:38:21.761865Z","end":"2025-11-01T09:38:22.020482Z","steps":["trace[1923204454] 'agreement among raft nodes before linearized reading'  (duration: 258.367536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:22.021861Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.637471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:22.022144Z","caller":"traceutil/trace.go:172","msg":"trace[1338214914] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"204.926581ms","start":"2025-11-01T09:38:21.817209Z","end":"2025-11-01T09:38:22.022136Z","steps":["trace[1338214914] 'agreement among raft nodes before linearized reading'  (duration: 204.544987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:28.532135Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.174758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:28.532200Z","caller":"traceutil/trace.go:172","msg":"trace[201510566] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"102.279227ms","start":"2025-11-01T09:38:28.429909Z","end":"2025-11-01T09:38:28.532189Z","steps":["trace[201510566] 'range keys from in-memory index tree'  (duration: 102.12208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:37.391447Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.500671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:37.391517Z","caller":"traceutil/trace.go:172","msg":"trace[1310258949] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1243; }","duration":"209.583948ms","start":"2025-11-01T09:38:37.181921Z","end":"2025-11-01T09:38:37.391505Z","steps":["trace[1310258949] 'range keys from in-memory index tree'  (duration: 209.366366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:37.391799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.336729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:37.391822Z","caller":"traceutil/trace.go:172","msg":"trace[156805817] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1243; }","duration":"172.362065ms","start":"2025-11-01T09:38:37.219453Z","end":"2025-11-01T09:38:37.391815Z","steps":["trace[156805817] 'range keys from in-memory index tree'  (duration: 172.276422ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:40.589819Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.445171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.221\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-11-01T09:38:40.589938Z","caller":"traceutil/trace.go:172","msg":"trace[13479312] range","detail":"{range_begin:/registry/masterleases/192.168.39.221; range_end:; response_count:1; response_revision:1262; }","duration":"113.585374ms","start":"2025-11-01T09:38:40.476336Z","end":"2025-11-01T09:38:40.589921Z","steps":["trace[13479312] 'range keys from in-memory index tree'  (duration: 113.289212ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:45.348153Z","caller":"traceutil/trace.go:172","msg":"trace[435748588] transaction","detail":"{read_only:false; response_revision:1303; number_of_response:1; }","duration":"193.074007ms","start":"2025-11-01T09:38:45.155064Z","end":"2025-11-01T09:38:45.348138Z","steps":["trace[435748588] 'process raft request'  (duration: 192.329899ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:45.347883Z","caller":"traceutil/trace.go:172","msg":"trace[1468135311] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1335; }","duration":"192.506498ms","start":"2025-11-01T09:38:45.155284Z","end":"2025-11-01T09:38:45.347791Z","steps":["trace[1468135311] 'read index received'  (duration: 192.33467ms)","trace[1468135311] 'applied index is now lower than readState.Index'  (duration: 170.196µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:38:45.348748Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.449199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:45.348796Z","caller":"traceutil/trace.go:172","msg":"trace[1273949664] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1303; }","duration":"193.505308ms","start":"2025-11-01T09:38:45.155278Z","end":"2025-11-01T09:38:45.348784Z","steps":["trace[1273949664] 'agreement among raft nodes before linearized reading'  (duration: 193.41991ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:45.493711Z","caller":"traceutil/trace.go:172","msg":"trace[381650539] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"130.333146ms","start":"2025-11-01T09:38:45.363364Z","end":"2025-11-01T09:38:45.493697Z","steps":["trace[381650539] 'process raft request'  (duration: 126.160572ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:45.494786Z","caller":"traceutil/trace.go:172","msg":"trace[520082674] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"129.569928ms","start":"2025-11-01T09:38:45.365207Z","end":"2025-11-01T09:38:45.494777Z","steps":["trace[520082674] 'process raft request'  (duration: 128.447376ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:58.843230Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.381732ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:58.843447Z","caller":"traceutil/trace.go:172","msg":"trace[1165364457] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1369; }","duration":"110.623326ms","start":"2025-11-01T09:38:58.732803Z","end":"2025-11-01T09:38:58.843427Z","steps":["trace[1165364457] 'range keys from in-memory index tree'  (duration: 110.277433ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:40:19.121695Z","caller":"traceutil/trace.go:172","msg":"trace[1327448757] transaction","detail":"{read_only:false; response_revision:1548; number_of_response:1; }","duration":"264.146903ms","start":"2025-11-01T09:40:18.857518Z","end":"2025-11-01T09:40:19.121665Z","steps":["trace[1327448757] 'process raft request'  (duration: 264.001745ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:41:09.922488Z","caller":"traceutil/trace.go:172","msg":"trace[66031800] transaction","detail":"{read_only:false; response_revision:1910; number_of_response:1; }","duration":"123.008521ms","start":"2025-11-01T09:41:09.799461Z","end":"2025-11-01T09:41:09.922469Z","steps":["trace[66031800] 'process raft request'  (duration: 122.909469ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:41:10.507837Z","caller":"traceutil/trace.go:172","msg":"trace[8042099] transaction","detail":"{read_only:false; response_revision:1911; number_of_response:1; }","duration":"121.295835ms","start":"2025-11-01T09:41:10.386528Z","end":"2025-11-01T09:41:10.507824Z","steps":["trace[8042099] 'process raft request'  (duration: 121.20112ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:41:10.821626Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.136565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:41:10.821698Z","caller":"traceutil/trace.go:172","msg":"trace[1095472228] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1913; }","duration":"143.251524ms","start":"2025-11-01T09:41:10.678431Z","end":"2025-11-01T09:41:10.821682Z","steps":["trace[1095472228] 'range keys from in-memory index tree'  (duration: 143.068752ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:24 up 7 min,  0 users,  load average: 0.29, 1.01, 0.66
	Linux addons-171954 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [715cf963e478] <==
	W1101 09:40:32.224487       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1101 09:40:50.336603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.221:8443->192.168.39.1:54048: use of closed network connection
	E1101 09:40:50.554267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.221:8443->192.168.39.1:54080: use of closed network connection
	I1101 09:40:59.867847       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:41:00.152102       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.252.120"}
	I1101 09:41:00.711806       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.190.114"}
	I1101 09:41:11.830636       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.252.114"}
	E1101 09:41:14.528101       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1101 09:41:15.077205       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1101 09:41:15.085127       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I1101 09:41:20.180682       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 09:41:50.392902       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 09:42:12.140490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:42:12.140535       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:42:12.176413       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:42:12.176449       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:42:12.208511       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:42:12.208759       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:42:12.237140       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:42:12.237184       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:42:12.254936       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:42:12.255572       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 09:42:13.237631       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1101 09:42:13.255448       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1101 09:42:13.275532       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [788466331c53] <==
	E1101 09:43:36.116312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:43:38.079853       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:43:38.081546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:43:52.109325       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:43:52.112205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:43:52.316171       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:43:52.317437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:43:58.157036       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:43:58.158821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:06.023268       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:06.024743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:06.735697       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:06.737353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:07.942927       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:07.944390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:09.679164       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:09.680594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:15.893899       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:15.895250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:17.623616       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:17.625398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:22.561759       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:22.563628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:44:23.378811       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:44:23.380046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [464b6bc30e41] <==
	I1101 09:37:18.392707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:37:18.494314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:37:18.494357       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.221"]
	E1101 09:37:18.519690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:37:18.679724       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:37:18.679791       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:37:18.679852       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:37:18.696201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:37:18.697187       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:37:18.697213       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:37:18.707914       1 config.go:200] "Starting service config controller"
	I1101 09:37:18.707940       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:37:18.707959       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:37:18.707963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:37:18.708145       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:37:18.708330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:37:18.710689       1 config.go:309] "Starting node config controller"
	I1101 09:37:18.710770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:37:18.710776       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:37:18.808186       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:37:18.808242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:37:18.809176       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [31e80d2ae02f] <==
	E1101 09:37:08.810855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:37:08.811868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:37:08.812684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:08.813168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:37:08.813204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:37:08.815050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:37:08.815368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:37:08.815433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:37:08.815533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:37:09.642956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:37:09.742793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:37:09.770170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:09.770314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:37:09.774330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:37:09.781486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:37:09.784860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:37:09.848858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:37:09.860176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:37:09.891638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:37:09.997086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:37:10.043773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:37:10.060650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:37:10.086729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:37:10.108532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1101 09:37:12.687781       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:42:14 addons-171954 kubelet[2532]: I1101 09:42:14.840711    2532 scope.go:117] "RemoveContainer" containerID="0439241552f7ea0481feba37f2e0797cd345d92093d386d2a9b51f8ef1d3d6ed"
	Nov 01 09:42:14 addons-171954 kubelet[2532]: I1101 09:42:14.841394    2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0439241552f7ea0481feba37f2e0797cd345d92093d386d2a9b51f8ef1d3d6ed"} err="failed to get container status \"0439241552f7ea0481feba37f2e0797cd345d92093d386d2a9b51f8ef1d3d6ed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0439241552f7ea0481feba37f2e0797cd345d92093d386d2a9b51f8ef1d3d6ed"
	Nov 01 09:42:14 addons-171954 kubelet[2532]: I1101 09:42:14.841423    2532 scope.go:117] "RemoveContainer" containerID="603f3f9917926850d6e93175191df2400b3644ac4bf659c28010de06598ad76b"
	Nov 01 09:42:14 addons-171954 kubelet[2532]: I1101 09:42:14.842347    2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"603f3f9917926850d6e93175191df2400b3644ac4bf659c28010de06598ad76b"} err="failed to get container status \"603f3f9917926850d6e93175191df2400b3644ac4bf659c28010de06598ad76b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 603f3f9917926850d6e93175191df2400b3644ac4bf659c28010de06598ad76b"
	Nov 01 09:42:15 addons-171954 kubelet[2532]: I1101 09:42:15.598111    2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079d5731-0a56-4df7-a492-02ec65ed4d6b" path="/var/lib/kubelet/pods/079d5731-0a56-4df7-a492-02ec65ed4d6b/volumes"
	Nov 01 09:42:15 addons-171954 kubelet[2532]: I1101 09:42:15.598957    2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5dba75-d888-47d8-984c-5b3613134aa9" path="/var/lib/kubelet/pods/1c5dba75-d888-47d8-984c-5b3613134aa9/volumes"
	Nov 01 09:42:15 addons-171954 kubelet[2532]: I1101 09:42:15.599788    2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d57dbd1-1833-4218-91c5-3745f27e0278" path="/var/lib/kubelet/pods/4d57dbd1-1833-4218-91c5-3745f27e0278/volumes"
	Nov 01 09:42:15 addons-171954 kubelet[2532]: I1101 09:42:15.600531    2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16a2925-b54d-4cb2-86b7-2cadf629f685" path="/var/lib/kubelet/pods/d16a2925-b54d-4cb2-86b7-2cadf629f685/volumes"
	Nov 01 09:42:15 addons-171954 kubelet[2532]: I1101 09:42:15.601409    2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7af45d7-28a8-4d42-965c-771bedd789c5" path="/var/lib/kubelet/pods/d7af45d7-28a8-4d42-965c-771bedd789c5/volumes"
	Nov 01 09:42:16 addons-171954 kubelet[2532]: E1101 09:42:16.592186    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:42:17 addons-171954 kubelet[2532]: I1101 09:42:17.589320    2532 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6d6zp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:42:31 addons-171954 kubelet[2532]: E1101 09:42:31.595426    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:42:43 addons-171954 kubelet[2532]: E1101 09:42:43.601236    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:42:58 addons-171954 kubelet[2532]: E1101 09:42:58.891754    2532 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:42:58 addons-171954 kubelet[2532]: E1101 09:42:58.891816    2532 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:42:58 addons-171954 kubelet[2532]: E1101 09:42:58.891911    2532 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(f05c6362-0a7c-45d4-9050-a90af7299c9f): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:42:58 addons-171954 kubelet[2532]: E1101 09:42:58.891944    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:43:09 addons-171954 kubelet[2532]: E1101 09:43:09.594864    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:43:24 addons-171954 kubelet[2532]: E1101 09:43:24.592313    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:43:28 addons-171954 kubelet[2532]: I1101 09:43:28.588316    2532 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:43:29 addons-171954 kubelet[2532]: I1101 09:43:29.589161    2532 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6d6zp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:43:37 addons-171954 kubelet[2532]: E1101 09:43:37.594284    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:43:49 addons-171954 kubelet[2532]: E1101 09:43:49.592497    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:44:02 addons-171954 kubelet[2532]: E1101 09:44:02.591826    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	Nov 01 09:44:16 addons-171954 kubelet[2532]: E1101 09:44:16.591374    2532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="f05c6362-0a7c-45d4-9050-a90af7299c9f"
	
	
	==> storage-provisioner [e84a69273dd9] <==
	W1101 09:43:58.951516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.955696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.960875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.964785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.969544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.973434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.979278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:06.982768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:06.989925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:08.993489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:09.000668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:11.004820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:11.011174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.014244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.019585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.022724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.030548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.034676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.040294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.045380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.053197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.057170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.065365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.069787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.074833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-171954 -n addons-171954
helpers_test.go:269: (dbg) Run:  kubectl --context addons-171954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-171954 describe pod test-local-path
helpers_test.go:290: (dbg) kubectl --context addons-171954 describe pod test-local-path:

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-171954/192.168.39.221
	Start Time:       Sat, 01 Nov 2025 09:41:23 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t9m9z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t9m9z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/test-local-path to addons-171954
	  Warning  Failed     2m20s (x3 over 3m)  kubelet            Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    88s (x4 over 3m1s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     87s (x4 over 3m)    kubelet            Error: ErrImagePull
	  Warning  Failed     87s                 kubelet            Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x11 over 3m)    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     9s (x11 over 3m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestAddons/parallel/LocalPath FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.739762468s)
--- FAIL: TestAddons/parallel/LocalPath (230.17s)
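The post-mortem above already contains the root cause: every pull of busybox:stable was rejected by Docker Hub with "toomanyrequests" (the unauthenticated pull rate limit), so test-local-path never left ImagePullBackOff and the 3m0s wait expired. A minimal mitigation sketch for reproducing this locally, assuming an authenticated Docker daemon is available on the test host; the profile name addons-171954 is taken from the logs above, and docker login / docker pull / minikube image load are standard commands rather than part of the test itself:

  $ docker login                                          # authenticate so Docker Hub applies the per-account pull limit
  $ docker pull busybox:stable                            # cache the image on the host
  $ minikube -p addons-171954 image load busybox:stable   # side-load it into the cluster node

With the image already present on the node (busybox:stable defaults to imagePullPolicy: IfNotPresent), kubelet can start the pod without contacting the registry, so the rate limit cannot trip the wait.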

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (301.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-498549 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
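For context, the daemon did enable the addon (see the stderr below) but never printed a proxy URL inside the test's window; minikube prints the URL only after its "Verifying dashboard health" and proxy checks succeed. A quick manual check against the same profile, assuming the addon keeps its default kubernetes-dashboard deployment and namespace names (the namespace is visible in the service dump further down); these kubectl invocations are standard and only illustrate how to confirm whether the dashboard rollout ever finished:

  $ kubectl --context functional-498549 -n kubernetes-dashboard get deploy,pods
  $ kubectl --context functional-498549 -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m

If the rollout never completes, minikube dashboard --url keeps waiting without printing a URL, which would match the failure recorded here.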
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-498549 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-498549 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-498549 --alsologtostderr -v=1] stderr:
I1101 09:49:36.417720  474826 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:36.418037  474826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:36.418048  474826 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:36.418052  474826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:36.418279  474826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:36.418547  474826 mustload.go:66] Loading cluster: functional-498549
I1101 09:49:36.418939  474826 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:36.421115  474826 host.go:66] Checking if "functional-498549" exists ...
I1101 09:49:36.421383  474826 api_server.go:166] Checking apiserver status ...
I1101 09:49:36.421439  474826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 09:49:36.424126  474826 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:36.424619  474826 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:36.424648  474826 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:36.424797  474826 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:36.524096  474826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10812/cgroup
W1101 09:49:36.537729  474826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10812/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1101 09:49:36.537838  474826 ssh_runner.go:195] Run: ls
I1101 09:49:36.543155  474826 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8441/healthz ...
I1101 09:49:36.549767  474826 api_server.go:279] https://192.168.39.190:8441/healthz returned 200:
ok
W1101 09:49:36.549823  474826 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1101 09:49:36.549976  474826 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:36.549997  474826 addons.go:70] Setting dashboard=true in profile "functional-498549"
I1101 09:49:36.550008  474826 addons.go:239] Setting addon dashboard=true in "functional-498549"
I1101 09:49:36.550034  474826 host.go:66] Checking if "functional-498549" exists ...
I1101 09:49:36.553477  474826 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1101 09:49:36.554690  474826 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1101 09:49:36.555716  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1101 09:49:36.555741  474826 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1101 09:49:36.558948  474826 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:36.559514  474826 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:36.559551  474826 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:36.559947  474826 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:36.681715  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1101 09:49:36.681742  474826 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1101 09:49:36.703972  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1101 09:49:36.704009  474826 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1101 09:49:36.726178  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1101 09:49:36.726218  474826 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1101 09:49:36.748214  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1101 09:49:36.748250  474826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1101 09:49:36.769887  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1101 09:49:36.769919  474826 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1101 09:49:36.800746  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1101 09:49:36.800785  474826 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1101 09:49:36.821937  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1101 09:49:36.821973  474826 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1101 09:49:36.842647  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1101 09:49:36.842677  474826 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1101 09:49:36.864176  474826 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1101 09:49:36.864207  474826 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1101 09:49:36.885048  474826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1101 09:49:37.843913  474826 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-498549 addons enable metrics-server

I1101 09:49:37.845011  474826 addons.go:202] Writing out "functional-498549" config to set dashboard=true...
W1101 09:49:37.845292  474826 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1101 09:49:37.846130  474826 kapi.go:59] client config for functional-498549: &rest.Config{Host:"https://192.168.39.190:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.key", CAFile:"/home/jenkins/minikube-integration/21830-464466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 09:49:37.846580  474826 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 09:49:37.846595  474826 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 09:49:37.846599  474826 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 09:49:37.846603  474826 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 09:49:37.846606  474826 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 09:49:37.856976  474826 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  b3ab4d92-1ac5-441c-b2e8-cb890063ec25 871 0 2025-11-01 09:49:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-01 09:49:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.39.79,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.39.79],IPFamilies:[IPv4],AllocateLoadBalancerNod
ePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1101 09:49:37.857130  474826 out.go:285] * Launching proxy ...
* Launching proxy ...
I1101 09:49:37.857216  474826 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-498549 proxy --port 36195]
I1101 09:49:37.857592  474826 dashboard.go:159] Waiting for kubectl to output host:port ...
I1101 09:49:37.902774  474826 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1101 09:49:37.902858  474826 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1101 09:49:37.912014  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acc008bd-47e0-416a-9fd4-f0d17166a9b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc000790ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142b40 TLS:<nil>}
I1101 09:49:37.912109  474826 retry.go:31] will retry after 101.906µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.916692  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af783936-d2eb-4a29-a8dc-d139bae3f2ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc00081dc00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206b40 TLS:<nil>}
I1101 09:49:37.916792  474826 retry.go:31] will retry after 216.679µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.920529  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c7e8547d-1219-4274-9f83-9142c3c0e6ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc0015a5d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092a780 TLS:<nil>}
I1101 09:49:37.920577  474826 retry.go:31] will retry after 221.242µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.924307  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d35e5047-97ca-453e-acca-ba29b8bdc519] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc00081dd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142c80 TLS:<nil>}
I1101 09:49:37.924361  474826 retry.go:31] will retry after 242.433µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.930823  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[127fe992-c678-45a7-bdbd-d18afce17638] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc000790bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092a8c0 TLS:<nil>}
I1101 09:49:37.930885  474826 retry.go:31] will retry after 505.07µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.934736  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3d69508-7f75-4f3a-a2da-47964877b297] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc00081de40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206c80 TLS:<nil>}
I1101 09:49:37.934787  474826 retry.go:31] will retry after 951.485µs: Temporary Error: unexpected response code: 503
I1101 09:49:37.938120  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c40dae38-5004-4d9c-a9c4-b0b36d9bc136] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc000790cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092aa00 TLS:<nil>}
I1101 09:49:37.938173  474826 retry.go:31] will retry after 1.524939ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.942439  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28e96e51-ba74-4a49-b3fc-63d6860125e9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc0015a5e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1101 09:49:37.942479  474826 retry.go:31] will retry after 1.414102ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.947047  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58aad267-cd52-47db-adf5-96c37f883eae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc000790dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142dc0 TLS:<nil>}
I1101 09:49:37.947093  474826 retry.go:31] will retry after 1.285076ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.951459  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9488cfc5-34f1-4587-81f9-2837c89333e4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc0015a5f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1101 09:49:37.951500  474826 retry.go:31] will retry after 3.4307ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.958716  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3365034a-06e1-4fc2-8066-a7f1d3a3bad6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc000790ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142f00 TLS:<nil>}
I1101 09:49:37.958766  474826 retry.go:31] will retry after 5.138767ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.967476  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae25954c-1467-4723-8f3a-aacd486b9618] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc0017ac080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1101 09:49:37.967535  474826 retry.go:31] will retry after 8.547163ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.980036  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[789a77d1-5d5f-44cf-bef0-543510452be1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc0017ac180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143180 TLS:<nil>}
I1101 09:49:37.980105  474826 retry.go:31] will retry after 14.491436ms: Temporary Error: unexpected response code: 503
I1101 09:49:37.998313  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bcce37a8-cda3-4ce4-8681-18491092c505] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:37 GMT]] Body:0xc00081df40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001432c0 TLS:<nil>}
I1101 09:49:37.998366  474826 retry.go:31] will retry after 18.562193ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.021345  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea19c538-56f3-41a2-80d6-ea88804a4e0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc0017ac280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092ab40 TLS:<nil>}
I1101 09:49:38.021438  474826 retry.go:31] will retry after 27.335456ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.055350  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ef34acf-815c-48e4-be18-ee706f919f78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc0017520c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143400 TLS:<nil>}
I1101 09:49:38.055436  474826 retry.go:31] will retry after 23.184772ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.083716  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ff70a5f-b048-4e6a-9d2a-ddf944ec144a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc000b48780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092ac80 TLS:<nil>}
I1101 09:49:38.083814  474826 retry.go:31] will retry after 97.990911ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.187177  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e39a10fc-595d-47d7-bbbe-7107156e1cdb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc0017521c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055d400 TLS:<nil>}
I1101 09:49:38.187282  474826 retry.go:31] will retry after 126.194137ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.319306  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a52ffef2-e9ba-448d-b357-fa201130e68d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc000b48840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092adc0 TLS:<nil>}
I1101 09:49:38.319422  474826 retry.go:31] will retry after 121.462044ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.448145  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05fb2fde-23e5-4153-8fed-647b1722bed5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc000791000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055d540 TLS:<nil>}
I1101 09:49:38.448242  474826 retry.go:31] will retry after 167.25801ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.620138  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba8fec46-671a-4a70-9241-1297e35cabf8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc000b48940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1101 09:49:38.620228  474826 retry.go:31] will retry after 273.942039ms: Temporary Error: unexpected response code: 503
I1101 09:49:38.897976  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b378480f-dca0-4c36-9b0d-6288516c41f5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:38 GMT]] Body:0xc0017522c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055d680 TLS:<nil>}
I1101 09:49:38.898042  474826 retry.go:31] will retry after 421.781197ms: Temporary Error: unexpected response code: 503
I1101 09:49:39.324476  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[98506d74-f305-4d68-b5cb-055aa87afdb9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:39 GMT]] Body:0xc000b48a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092af00 TLS:<nil>}
I1101 09:49:39.324566  474826 retry.go:31] will retry after 929.056483ms: Temporary Error: unexpected response code: 503
I1101 09:49:40.258374  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7c05c04-cfb5-4503-93ce-0961e51fdd3f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:40 GMT]] Body:0xc000b48b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055d7c0 TLS:<nil>}
I1101 09:49:40.258446  474826 retry.go:31] will retry after 1.530019815s: Temporary Error: unexpected response code: 503
I1101 09:49:41.795056  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2265a08-1b6f-471b-bd4a-a0c15b1f6c13] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:41 GMT]] Body:0xc000791140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092b040 TLS:<nil>}
I1101 09:49:41.795144  474826 retry.go:31] will retry after 2.487798755s: Temporary Error: unexpected response code: 503
I1101 09:49:44.288258  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ecc04226-f90d-4287-b310-8d8c95da1a38] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:44 GMT]] Body:0xc000b48b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1101 09:49:44.288337  474826 retry.go:31] will retry after 2.434353039s: Temporary Error: unexpected response code: 503
I1101 09:49:46.727134  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dc9db790-1dea-4ef6-bdfe-f04221c0b012] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:46 GMT]] Body:0xc000b48c80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055d900 TLS:<nil>}
I1101 09:49:46.727219  474826 retry.go:31] will retry after 4.936717431s: Temporary Error: unexpected response code: 503
I1101 09:49:51.667849  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[22a32a26-bf26-4be6-9516-ff0c150d509e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:51 GMT]] Body:0xc0017ac380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055da40 TLS:<nil>}
I1101 09:49:51.667942  474826 retry.go:31] will retry after 4.211818765s: Temporary Error: unexpected response code: 503
I1101 09:49:55.885070  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49304f0d-2ee8-4fac-a511-ca11c849e6c4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:49:55 GMT]] Body:0xc0017ac480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143540 TLS:<nil>}
I1101 09:49:55.885152  474826 retry.go:31] will retry after 4.840616104s: Temporary Error: unexpected response code: 503
I1101 09:50:00.728923  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8aadcb6-d39d-4847-99bb-986ad796823e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:50:00 GMT]] Body:0xc000b48d80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1101 09:50:00.728991  474826 retry.go:31] will retry after 15.467680706s: Temporary Error: unexpected response code: 503
I1101 09:50:16.203880  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96483ed8-7e0e-4fd9-b171-85f37b01e43a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:50:16 GMT]] Body:0xc0007912c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143680 TLS:<nil>}
I1101 09:50:16.203961  474826 retry.go:31] will retry after 12.428304753s: Temporary Error: unexpected response code: 503
I1101 09:50:28.636707  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3dc60af0-c4b1-4a69-996c-5ee6ed46b28a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:50:28 GMT]] Body:0xc0017ac5c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1101 09:50:28.636788  474826 retry.go:31] will retry after 14.73438513s: Temporary Error: unexpected response code: 503
I1101 09:50:43.375337  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ec16c16-c10c-4713-8908-7cea5707d227] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:50:43 GMT]] Body:0xc000b48e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143900 TLS:<nil>}
I1101 09:50:43.375421  474826 retry.go:31] will retry after 21.901549557s: Temporary Error: unexpected response code: 503
I1101 09:51:05.283447  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff5a6d71-a709-408e-936b-fe92cb690f45] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:51:05 GMT]] Body:0xc0007913c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00035a000 TLS:<nil>}
I1101 09:51:05.283532  474826 retry.go:31] will retry after 33.573829049s: Temporary Error: unexpected response code: 503
I1101 09:51:38.861949  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fbb0845-46be-4f0c-b03d-83f9dcee5963] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:51:38 GMT]] Body:0xc000790200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00092a000 TLS:<nil>}
I1101 09:51:38.862034  474826 retry.go:31] will retry after 1m21.471930304s: Temporary Error: unexpected response code: 503
I1101 09:53:00.340318  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ff95fe6-fe65-4cc9-883f-1a9d4de8701e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:53:00 GMT]] Body:0xc0017ac0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142140 TLS:<nil>}
I1101 09:53:00.340494  474826 retry.go:31] will retry after 1m26.064438104s: Temporary Error: unexpected response code: 503
I1101 09:54:26.413526  474826 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b462a049-41ff-49be-92a1-a0ef6cf0109e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:54:26 GMT]] Body:0xc0017ac0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000142280 TLS:<nil>}
I1101 09:54:26.413618  474826 retry.go:31] will retry after 33.89076824s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-498549 -n functional-498549
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-498549 logs -n 25: (1.02630855s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-498549 ssh sudo umount -f /mount-9p                                                                             │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount3 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount2 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount1 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount2                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount3                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ mount          │ -p functional-498549 --kill=true                                                                                           │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp functional-498549:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3581848362/001/cp-test.txt │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format short --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format yaml --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh pgrep buildkitd                                                                                      │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ image          │ functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr                     │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls                                                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format json --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format table --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:36
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:36.297449  474805 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:36.297553  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297560  474805 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:36.297567  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297765  474805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:49:36.298180  474805 out.go:368] Setting JSON to false
	I1101 09:49:36.299529  474805 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5515,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:36.299678  474805 start.go:143] virtualization: kvm guest
	I1101 09:49:36.301354  474805 out.go:179] * [functional-498549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:36.302639  474805 notify.go:221] Checking for updates...
	I1101 09:49:36.302697  474805 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:36.303852  474805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:36.305068  474805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:49:36.306339  474805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:49:36.311055  474805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:36.312284  474805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:36.313874  474805 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:49:36.314367  474805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:36.347103  474805 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:49:36.348125  474805 start.go:309] selected driver: kvm2
	I1101 09:49:36.348137  474805 start.go:930] validating driver "kvm2" against &{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.348265  474805 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:36.349177  474805 cni.go:84] Creating CNI manager for ""
	I1101 09:49:36.349234  474805 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:49:36.349288  474805 start.go:353] cluster config:
	{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.350494  474805 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Nov 01 09:49:52 functional-498549 dockerd[8069]: time="2025-11-01T09:49:52.230819638Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:49:54 functional-498549 dockerd[8069]: time="2025-11-01T09:49:54.467267846Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:49:54 functional-498549 dockerd[8069]: time="2025-11-01T09:49:54.959620192Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:49:57 functional-498549 dockerd[8069]: time="2025-11-01T09:49:57.463142922Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:49:57 functional-498549 dockerd[8069]: time="2025-11-01T09:49:57.954168585Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:20 functional-498549 dockerd[8069]: time="2025-11-01T09:50:20.212694051Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.226557798Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.477130973Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.960280845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.460784368Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.943329516Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:04 functional-498549 dockerd[8069]: time="2025-11-01T09:51:04.459022936Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:51:05 functional-498549 dockerd[8069]: time="2025-11-01T09:51:05.234540847Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:05 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:51:05Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Nov 01 09:51:07 functional-498549 dockerd[8069]: time="2025-11-01T09:51:07.227762909Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.469873152Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.952236374Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:19 functional-498549 dockerd[8069]: time="2025-11-01T09:51:19.198846812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.470393255Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.953238125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 dockerd[8069]: time="2025-11-01T09:52:43.533396643Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:52:43Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.462660099Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.947443430Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:50 functional-498549 dockerd[8069]: time="2025-11-01T09:52:50.232188344Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5d6bbf5c6518       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   c9803260d4e64       busybox-mount
	0cb10e8086d8b       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   d4457f1a34a97       hello-node-75c85bcc94-7x2w6
	2a29cc98baa49       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   c139178a45c56       hello-node-connect-7d85dfc575-m4bk7
	c2cdeff012b52       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   3                   f11188eab1c36       coredns-66bc5c9577-s297q
	2440fa540f475       fc25172553d79                                                                                         5 minutes ago       Running             kube-proxy                4                   c582a9a56e2b6       kube-proxy-4vrtg
	5d2dfc5494539       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       4                   02ba7c8a6daa3       storage-provisioner
	85df8e61c49cf       c3994bc696102                                                                                         5 minutes ago       Running             kube-apiserver            0                   1d38edd69a2ba       kube-apiserver-functional-498549
	dd2203b6960ff       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      3                   738aa2c164dcd       etcd-functional-498549
	4abd904c8bef8       c80c8dbafe7dd                                                                                         5 minutes ago       Running             kube-controller-manager   4                   df3ae6f2282d3       kube-controller-manager-functional-498549
	fb93558220c0a       7dd6aaa1717ab                                                                                         5 minutes ago       Running             kube-scheduler            4                   527ef2e948d0f       kube-scheduler-functional-498549
	3389091e55d42       c80c8dbafe7dd                                                                                         5 minutes ago       Exited              kube-controller-manager   3                   0b67e9ab7bff4       kube-controller-manager-functional-498549
	6367b452e0bf5       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       3                   1f67feac74f2d       storage-provisioner
	296cd2b455317       fc25172553d79                                                                                         5 minutes ago       Exited              kube-proxy                3                   ce0d0632687fe       kube-proxy-4vrtg
	5f80764b85354       7dd6aaa1717ab                                                                                         5 minutes ago       Exited              kube-scheduler            3                   1a65d981977e2       kube-scheduler-functional-498549
	085f09b49a21f       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   2                   18afadcba9849       coredns-66bc5c9577-s297q
	b17d848698ce0       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      2                   4e1b158191db6       etcd-functional-498549
	
	
	==> coredns [085f09b49a21] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52847 - 51895 "HINFO IN 2719554848982036746.9051042743324441036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066954158s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2cdeff012b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42106 - 17213 "HINFO IN 1324422783965751308.5049648153328789531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065669623s
	
	
	==> describe nodes <==
	Name:               functional-498549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-498549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-498549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_46_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:46:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    functional-498549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 609951c8288c4e84a20f1cb18a186ca3
	  System UUID:                609951c8-288c-4e84-a20f-1cb18a186ca3
	  Boot ID:                    1320ff06-1a33-464a-bca4-b742eeebc6db
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7x2w6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  default                     hello-node-connect-7d85dfc575-m4bk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  default                     mysql-5bb876957f-qjh5x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 coredns-66bc5c9577-s297q                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m35s
	  kube-system                 etcd-functional-498549                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m40s
	  kube-system                 kube-apiserver-functional-498549              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-functional-498549     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-proxy-4vrtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-functional-498549              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-qjqz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6vc78         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m33s                  kube-proxy       
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m47s (x8 over 7m47s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s (x8 over 7m47s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s (x7 over 7m47s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m40s                  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s                  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s                  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m39s                  kubelet          Node functional-498549 status is now: NodeReady
	  Normal  RegisteredNode           7m36s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	
	
	==> dmesg <==
	[  +1.188102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116631] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.097952] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.137926] kauditd_printk_skb: 165 callbacks suppressed
	[Nov 1 09:47] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.651961] kauditd_printk_skb: 276 callbacks suppressed
	[ +15.165531] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.479605] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.536315] kauditd_printk_skb: 467 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 95 callbacks suppressed
	[Nov 1 09:48] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.171279] kauditd_printk_skb: 89 callbacks suppressed
	[ +15.170976] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.454126] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.122368] kauditd_printk_skb: 497 callbacks suppressed
	[Nov 1 09:49] kauditd_printk_skb: 159 callbacks suppressed
	[ +16.402319] kauditd_printk_skb: 23 callbacks suppressed
	[  +2.095229] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.345560] kauditd_printk_skb: 116 callbacks suppressed
	[  +0.795522] kauditd_printk_skb: 145 callbacks suppressed
	[  +1.729136] crun[14014]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000286] kauditd_printk_skb: 190 callbacks suppressed
	
	
	==> etcd [b17d848698ce] <==
	{"level":"warn","ts":"2025-11-01T09:48:02.480849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.489738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.500529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.510598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.521557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.530837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.608072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:48:40.004100Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:48:40.004216Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	{"level":"error","ts":"2025-11-01T09:48:40.004286Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009921Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.010183Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dc6e2f4e9dcc679a","current-leader-member-id":"dc6e2f4e9dcc679a"}
	{"level":"info","ts":"2025-11-01T09:48:47.010337Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:48:47.010348Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011342Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011426Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011452Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011618Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011704Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011746Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014152Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"error","ts":"2025-11-01T09:48:47.014479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014518Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2025-11-01T09:48:47.014527Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	
	
	==> etcd [dd2203b6960f] <==
	{"level":"warn","ts":"2025-11-01T09:48:59.910424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.931409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.946885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.963363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.970127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.988066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.004480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.043942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.058161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.077579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.092783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.125129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.139488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.155940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.163674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.175386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.189147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.199729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.207359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.220765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.229268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.243183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.249186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.260727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.325355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:37 up 8 min,  0 users,  load average: 0.13, 0.44, 0.31
	Linux functional-498549 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [85df8e61c49c] <==
	E1101 09:49:01.099999       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:49:01.103002       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:49:01.103341       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:49:01.103523       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:49:01.103581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:49:01.103588       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:49:01.105959       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:49:01.106049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:49:01.106926       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:49:01.208821       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:49:01.898624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:49:02.936410       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:49:02.983562       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:49:03.024715       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:49:03.033726       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:49:04.381222       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:49:04.680857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:49:04.731850       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:49:19.628922       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.149.121"}
	I1101 09:49:25.086837       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.37.209"}
	I1101 09:49:25.711299       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.165.63"}
	I1101 09:49:36.551438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.213.185"}
	I1101 09:49:37.492241       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:49:37.809701       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.39.79"}
	I1101 09:49:37.830412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.94.106"}
	
	
	==> kube-controller-manager [3389091e55d4] <==
	
	
	==> kube-controller-manager [4abd904c8bef] <==
	I1101 09:49:04.394774       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:49:04.399038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:49:04.399123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:49:04.402483       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:49:04.405870       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:49:04.405877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:49:04.410237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:49:04.412726       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:49:04.415761       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:49:04.420118       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:49:04.427583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:49:04.427590       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:49:04.429024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:49:04.430113       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:49:04.430275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:49:04.430334       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:49:04.430663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:49:04.432636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	E1101 09:49:37.655781       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666279       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666351       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.679494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.682123       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.688264       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.695133       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2440fa540f47] <==
	I1101 09:49:01.938675       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:49:02.039605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:49:02.039691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.190"]
	E1101 09:49:02.039807       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:49:02.227104       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:49:02.227179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:49:02.227205       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:49:02.253641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:49:02.255681       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:49:02.255722       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:02.265496       1 config.go:309] "Starting node config controller"
	I1101 09:49:02.265525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:49:02.265532       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:49:02.265846       1 config.go:200] "Starting service config controller"
	I1101 09:49:02.265872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:49:02.265888       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:49:02.265891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:49:02.265901       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:49:02.265904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:49:02.366053       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:49:02.366514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:49:02.366535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [296cd2b45531] <==
	I1101 09:48:53.195792       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:48:53.305077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:48:53.310112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498549&limit=500&resourceVersion=0\": dial tcp 192.168.39.190:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [5f80764b8535] <==
	I1101 09:48:54.510011       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [fb93558220c0] <==
	I1101 09:48:58.639113       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:49:00.949063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:49:00.949147       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:49:00.949157       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:49:00.949163       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:49:01.020551       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:49:01.021842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:01.024490       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.024562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.025433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:49:01.025604       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:49:01.124874       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:53:17 functional-498549 kubelet[10543]: E1101 09:53:17.225331   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:53:23 functional-498549 kubelet[10543]: E1101 09:53:23.215725   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:53:26 functional-498549 kubelet[10543]: E1101 09:53:26.213806   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:53:28 functional-498549 kubelet[10543]: E1101 09:53:28.215488   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:53:30 functional-498549 kubelet[10543]: E1101 09:53:30.215215   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:53:35 functional-498549 kubelet[10543]: E1101 09:53:35.216645   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:53:40 functional-498549 kubelet[10543]: E1101 09:53:40.215898   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:53:41 functional-498549 kubelet[10543]: E1101 09:53:41.213836   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:53:43 functional-498549 kubelet[10543]: E1101 09:53:43.214665   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:53:47 functional-498549 kubelet[10543]: E1101 09:53:47.216729   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:53:54 functional-498549 kubelet[10543]: E1101 09:53:54.215795   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:53:55 functional-498549 kubelet[10543]: E1101 09:53:55.218255   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:53:56 functional-498549 kubelet[10543]: E1101 09:53:56.213247   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:00 functional-498549 kubelet[10543]: E1101 09:54:00.214809   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:54:06 functional-498549 kubelet[10543]: E1101 09:54:06.215166   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:09 functional-498549 kubelet[10543]: E1101 09:54:09.217591   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:54:11 functional-498549 kubelet[10543]: E1101 09:54:11.221201   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:12 functional-498549 kubelet[10543]: E1101 09:54:12.215178   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:54:17 functional-498549 kubelet[10543]: E1101 09:54:17.217283   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:23 functional-498549 kubelet[10543]: E1101 09:54:23.214094   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:24 functional-498549 kubelet[10543]: E1101 09:54:24.214924   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:54:25 functional-498549 kubelet[10543]: E1101 09:54:25.215314   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:54:30 functional-498549 kubelet[10543]: E1101 09:54:30.214842   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:34 functional-498549 kubelet[10543]: E1101 09:54:34.213178   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:37 functional-498549 kubelet[10543]: E1101 09:54:37.219103   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	
	
	==> storage-provisioner [5d2dfc549453] <==
	W1101 09:54:12.753013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:14.756299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:14.764741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:16.767901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:16.773271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:18.776795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:18.786940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:20.790652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:20.796049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:22.800255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:22.808863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:24.812928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:24.820653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:26.824080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:26.829218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:28.833185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:28.840515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:30.844789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:30.850043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:32.853371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:32.860905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:34.865916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:34.871278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:36.878508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:54:36.886961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6367b452e0bf] <==
	I1101 09:48:53.150717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:48:53.155637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
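Note: the kubelet log above attributes every stuck pod (mysql-5bb876957f-qjh5x, sp-pod, and both kubernetes-dashboard pods) to Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), not to a minikube or Kubernetes regression. A minimal sketch of one common mitigation follows, assuming a Docker Hub account is available to the CI host; the secret name and credential placeholders are hypothetical and this is not part of the test suite's testdata.

	# Hypothetical: authenticate image pulls so they count against the per-account limit.
	# First create a registry credential (placeholders, not real values):
	#   kubectl create secret docker-registry dockerhub-creds \
	#     --docker-username=<user> --docker-password=<token> -n default
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	spec:
	  imagePullSecrets:
	    - name: dockerhub-creds   # secret created above (hypothetical name)
	  containers:
	    - name: myfrontend
	      image: docker.io/nginx

Alternatively, the images could be served from a mirror or pre-loaded into the node's container runtime so the test run does not hit docker.io at all.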
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-498549 -n functional-498549
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1 (94.161473ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a5d6bbf5c6518d250be962a65d23dc9f7168ad483825e4a14c41e415b12faa38
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:49:35 +0000
	      Finished:     Sat, 01 Nov 2025 09:49:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgmbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wgmbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m6s  default-scheduler  Successfully assigned default/busybox-mount to functional-498549
	  Normal  Pulling    5m6s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m3s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.192s (2.192s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m3s  kubelet            Created container: mount-munger
	  Normal  Started    5m3s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-qjh5x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdt8m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mdt8m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m2s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-qjh5x to functional-498549
	  Normal   Pulling    109s (x5 over 5m1s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     108s (x5 over 5m)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x5 over 5m)    kubelet            Error: ErrImagePull
	  Warning  Failed     55s (x15 over 5m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x19 over 5m)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9whqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9whqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m5s                   default-scheduler  Successfully assigned default/sp-pod to functional-498549
	  Warning  Failed     3m31s (x3 over 4m48s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    116s (x5 over 5m5s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     115s (x2 over 5m2s)    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s (x5 over 5m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     57s (x15 over 5m1s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x19 over 5m1s)     kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qjqz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6vc78" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1
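
For reference, the mysql pod described above (Controlled By: ReplicaSet/mysql-5bb876957f, i.e. it is owned by a Deployment) maps to roughly the manifest below. This is a sketch reconstructed only from the describe output; the actual manifest applied by the test may differ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql                          # name inferred from the ReplicaSet prefix mysql-5bb876957f
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: docker.io/mysql:5.7     # the pull that hits the Docker Hub unauthenticated rate limit
        ports:
        - name: mysql
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        resources:
          requests:
            cpu: 600m
            memory: 512Mi
          limits:
            cpu: 700m
            memory: 700Mi
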
E1101 09:54:56.335988  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:55:24.041686  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (301.77s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (370.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f70c1125-5b71-4923-a6cd-22393a71798e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004939342s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-498549 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-498549 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-498549 get pvc myclaim -o=json
I1101 09:49:30.932483  468355 retry.go:31] will retry after 2.306845806s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8fd84a27-d34c-420b-879a-3466ec8ad99c ResourceVersion:770 Generation:0 CreationTimestamp:2025-11-01 09:49:30 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-8fd84a27-d34c-420b-879a-3466ec8ad99c StorageClassName:0xc0019907f0 VolumeMode:0xc001990800 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
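
The kubectl.kubernetes.io/last-applied-configuration annotation captured in the retry message above corresponds to roughly this PVC manifest (reconstructed from the log; the actual testdata/storage-provisioner/pvc.yaml may differ in formatting):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi                   # 500Mi request shown in the retry log above
  volumeMode: Filesystem
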
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-498549 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-498549 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:49:33.421869  468355 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4a9e3196-d5c6-4e17-87a4-d12b937b8f11] Pending
helpers_test.go:352: "sp-pod" [4a9e3196-d5c6-4e17-87a4-d12b937b8f11] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-498549 -n functional-498549
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-01 09:55:33.644883589 +0000 UTC m=+1176.710427220
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-498549 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-498549 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498549/192.168.39.190
Start Time:       Sat, 01 Nov 2025 09:49:33 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:  10.244.0.12
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9whqj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-9whqj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-498549
Warning  Failed     4m26s (x3 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m51s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m50s (x2 over 5m57s)  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m50s (x5 over 5m57s)  kubelet            Error: ErrImagePull
Warning  Failed     47s (x20 over 5m56s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    34s (x21 over 5m56s)   kubelet            Back-off pulling image "docker.io/nginx"
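
The sp-pod spec implied by the describe output above corresponds to roughly the following manifest, a sketch reconstructed from the log (the actual testdata/storage-provisioner/pod.yaml may differ). It shows why the pod blocks on both the docker.io/nginx pull and the myclaim PVC:

apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  namespace: default
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx             # pull fails with toomanyrequests (unauthenticated rate limit)
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim               # bound earlier by the minikube-hostpath storage provisioner
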
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-498549 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-498549 logs sp-pod -n default: exit status 1 (82.307022ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-498549 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-498549 -n functional-498549
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-498549 ssh sudo umount -f /mount-9p                                                                             │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount3 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount2 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount1 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount2                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount3                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ mount          │ -p functional-498549 --kill=true                                                                                           │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp functional-498549:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3581848362/001/cp-test.txt │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format short --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format yaml --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh pgrep buildkitd                                                                                      │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ image          │ functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr                     │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls                                                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format json --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format table --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:36
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:36.297449  474805 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:36.297553  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297560  474805 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:36.297567  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297765  474805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:49:36.298180  474805 out.go:368] Setting JSON to false
	I1101 09:49:36.299529  474805 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5515,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:36.299678  474805 start.go:143] virtualization: kvm guest
	I1101 09:49:36.301354  474805 out.go:179] * [functional-498549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:36.302639  474805 notify.go:221] Checking for updates...
	I1101 09:49:36.302697  474805 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:36.303852  474805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:36.305068  474805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:49:36.306339  474805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:49:36.311055  474805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:36.312284  474805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:36.313874  474805 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:49:36.314367  474805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:36.347103  474805 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:49:36.348125  474805 start.go:309] selected driver: kvm2
	I1101 09:49:36.348137  474805 start.go:930] validating driver "kvm2" against &{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.348265  474805 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:36.349177  474805 cni.go:84] Creating CNI manager for ""
	I1101 09:49:36.349234  474805 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:49:36.349288  474805 start.go:353] cluster config:
	{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.350494  474805 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Nov 01 09:49:57 functional-498549 dockerd[8069]: time="2025-11-01T09:49:57.954168585Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:20 functional-498549 dockerd[8069]: time="2025-11-01T09:50:20.212694051Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.226557798Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.477130973Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.960280845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.460784368Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.943329516Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:04 functional-498549 dockerd[8069]: time="2025-11-01T09:51:04.459022936Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:51:05 functional-498549 dockerd[8069]: time="2025-11-01T09:51:05.234540847Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:05 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:51:05Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Nov 01 09:51:07 functional-498549 dockerd[8069]: time="2025-11-01T09:51:07.227762909Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.469873152Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.952236374Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:19 functional-498549 dockerd[8069]: time="2025-11-01T09:51:19.198846812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.470393255Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.953238125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 dockerd[8069]: time="2025-11-01T09:52:43.533396643Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:52:43Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.462660099Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.947443430Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:50 functional-498549 dockerd[8069]: time="2025-11-01T09:52:50.232188344Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:22 functional-498549 dockerd[8069]: time="2025-11-01T09:55:22.464827797Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:55:23 functional-498549 dockerd[8069]: time="2025-11-01T09:55:23.279828380Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:23 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:55:23Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Nov 01 09:55:26 functional-498549 dockerd[8069]: time="2025-11-01T09:55:26.219041319Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5d6bbf5c6518       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   c9803260d4e64       busybox-mount
	0cb10e8086d8b       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   d4457f1a34a97       hello-node-75c85bcc94-7x2w6
	2a29cc98baa49       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   c139178a45c56       hello-node-connect-7d85dfc575-m4bk7
	c2cdeff012b52       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   3                   f11188eab1c36       coredns-66bc5c9577-s297q
	2440fa540f475       fc25172553d79                                                                                         6 minutes ago       Running             kube-proxy                4                   c582a9a56e2b6       kube-proxy-4vrtg
	5d2dfc5494539       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       4                   02ba7c8a6daa3       storage-provisioner
	85df8e61c49cf       c3994bc696102                                                                                         6 minutes ago       Running             kube-apiserver            0                   1d38edd69a2ba       kube-apiserver-functional-498549
	dd2203b6960ff       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      3                   738aa2c164dcd       etcd-functional-498549
	4abd904c8bef8       c80c8dbafe7dd                                                                                         6 minutes ago       Running             kube-controller-manager   4                   df3ae6f2282d3       kube-controller-manager-functional-498549
	fb93558220c0a       7dd6aaa1717ab                                                                                         6 minutes ago       Running             kube-scheduler            4                   527ef2e948d0f       kube-scheduler-functional-498549
	3389091e55d42       c80c8dbafe7dd                                                                                         6 minutes ago       Exited              kube-controller-manager   3                   0b67e9ab7bff4       kube-controller-manager-functional-498549
	6367b452e0bf5       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       3                   1f67feac74f2d       storage-provisioner
	296cd2b455317       fc25172553d79                                                                                         6 minutes ago       Exited              kube-proxy                3                   ce0d0632687fe       kube-proxy-4vrtg
	5f80764b85354       7dd6aaa1717ab                                                                                         6 minutes ago       Exited              kube-scheduler            3                   1a65d981977e2       kube-scheduler-functional-498549
	085f09b49a21f       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   2                   18afadcba9849       coredns-66bc5c9577-s297q
	b17d848698ce0       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      2                   4e1b158191db6       etcd-functional-498549
	
	
	==> coredns [085f09b49a21] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52847 - 51895 "HINFO IN 2719554848982036746.9051042743324441036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066954158s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2cdeff012b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42106 - 17213 "HINFO IN 1324422783965751308.5049648153328789531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065669623s
	
	
	==> describe nodes <==
	Name:               functional-498549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-498549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-498549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_46_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:46:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:55:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:50:02 +0000   Sat, 01 Nov 2025 09:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    functional-498549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 609951c8288c4e84a20f1cb18a186ca3
	  System UUID:                609951c8-288c-4e84-a20f-1cb18a186ca3
	  Boot ID:                    1320ff06-1a33-464a-bca4-b742eeebc6db
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7x2w6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-m4bk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     mysql-5bb876957f-qjh5x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m58s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-s297q                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m32s
	  kube-system                 etcd-functional-498549                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m37s
	  kube-system                 kube-apiserver-functional-498549              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-functional-498549     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-4vrtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-functional-498549              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-qjqz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6vc78         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m30s                  kube-proxy       
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m44s (x8 over 8m44s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x8 over 8m44s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x7 over 8m44s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s                  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s                  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s                  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m36s                  kubelet          Node functional-498549 status is now: NodeReady
	  Normal  RegisteredNode           8m33s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  Starting                 7m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m34s (x8 over 7m34s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x8 over 7m34s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x7 over 7m34s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	
	
	==> dmesg <==
	[  +1.188102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116631] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.097952] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.137926] kauditd_printk_skb: 165 callbacks suppressed
	[Nov 1 09:47] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.651961] kauditd_printk_skb: 276 callbacks suppressed
	[ +15.165531] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.479605] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.536315] kauditd_printk_skb: 467 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 95 callbacks suppressed
	[Nov 1 09:48] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.171279] kauditd_printk_skb: 89 callbacks suppressed
	[ +15.170976] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.454126] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.122368] kauditd_printk_skb: 497 callbacks suppressed
	[Nov 1 09:49] kauditd_printk_skb: 159 callbacks suppressed
	[ +16.402319] kauditd_printk_skb: 23 callbacks suppressed
	[  +2.095229] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.345560] kauditd_printk_skb: 116 callbacks suppressed
	[  +0.795522] kauditd_printk_skb: 145 callbacks suppressed
	[  +1.729136] crun[14014]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000286] kauditd_printk_skb: 190 callbacks suppressed
	
	
	==> etcd [b17d848698ce] <==
	{"level":"warn","ts":"2025-11-01T09:48:02.480849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.489738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.500529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.510598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.521557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.530837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.608072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:48:40.004100Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:48:40.004216Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	{"level":"error","ts":"2025-11-01T09:48:40.004286Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009921Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.010183Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dc6e2f4e9dcc679a","current-leader-member-id":"dc6e2f4e9dcc679a"}
	{"level":"info","ts":"2025-11-01T09:48:47.010337Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:48:47.010348Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011342Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011426Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011452Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011618Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011704Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011746Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014152Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"error","ts":"2025-11-01T09:48:47.014479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014518Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2025-11-01T09:48:47.014527Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	
	
	==> etcd [dd2203b6960f] <==
	{"level":"warn","ts":"2025-11-01T09:48:59.910424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.931409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.946885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.963363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.970127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.988066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.004480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.043942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.058161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.077579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.092783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.125129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.139488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.155940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.163674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.175386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.189147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.199729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.207359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.220765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.229268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.243183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.249186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.260727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.325355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:55:34 up 9 min,  0 users,  load average: 0.04, 0.36, 0.28
	Linux functional-498549 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [85df8e61c49c] <==
	E1101 09:49:01.099999       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:49:01.103002       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:49:01.103341       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:49:01.103523       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:49:01.103581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:49:01.103588       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:49:01.105959       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:49:01.106049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:49:01.106926       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:49:01.208821       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:49:01.898624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:49:02.936410       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:49:02.983562       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:49:03.024715       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:49:03.033726       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:49:04.381222       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:49:04.680857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:49:04.731850       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:49:19.628922       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.149.121"}
	I1101 09:49:25.086837       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.37.209"}
	I1101 09:49:25.711299       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.165.63"}
	I1101 09:49:36.551438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.213.185"}
	I1101 09:49:37.492241       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:49:37.809701       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.39.79"}
	I1101 09:49:37.830412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.94.106"}
	
	
	==> kube-controller-manager [3389091e55d4] <==
	
	
	==> kube-controller-manager [4abd904c8bef] <==
	I1101 09:49:04.394774       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:49:04.399038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:49:04.399123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:49:04.402483       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:49:04.405870       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:49:04.405877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:49:04.410237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:49:04.412726       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:49:04.415761       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:49:04.420118       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:49:04.427583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:49:04.427590       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:49:04.429024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:49:04.430113       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:49:04.430275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:49:04.430334       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:49:04.430663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:49:04.432636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	E1101 09:49:37.655781       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666279       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666351       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.679494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.682123       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.688264       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.695133       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2440fa540f47] <==
	I1101 09:49:01.938675       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:49:02.039605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:49:02.039691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.190"]
	E1101 09:49:02.039807       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:49:02.227104       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:49:02.227179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:49:02.227205       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:49:02.253641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:49:02.255681       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:49:02.255722       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:02.265496       1 config.go:309] "Starting node config controller"
	I1101 09:49:02.265525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:49:02.265532       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:49:02.265846       1 config.go:200] "Starting service config controller"
	I1101 09:49:02.265872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:49:02.265888       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:49:02.265891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:49:02.265901       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:49:02.265904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:49:02.366053       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:49:02.366514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:49:02.366535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [296cd2b45531] <==
	I1101 09:48:53.195792       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:48:53.305077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:48:53.310112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498549&limit=500&resourceVersion=0\": dial tcp 192.168.39.190:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [5f80764b8535] <==
	I1101 09:48:54.510011       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [fb93558220c0] <==
	I1101 09:48:58.639113       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:49:00.949063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:49:00.949147       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:49:00.949157       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:49:00.949163       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:49:01.020551       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:49:01.021842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:01.024490       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.024562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.025433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:49:01.025604       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:49:01.124874       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:54:30 functional-498549 kubelet[10543]: E1101 09:54:30.214842   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:34 functional-498549 kubelet[10543]: E1101 09:54:34.213178   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:37 functional-498549 kubelet[10543]: E1101 09:54:37.219103   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:54:39 functional-498549 kubelet[10543]: E1101 09:54:39.217023   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:54:45 functional-498549 kubelet[10543]: E1101 09:54:45.216188   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:46 functional-498549 kubelet[10543]: E1101 09:54:46.212844   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:54:50 functional-498549 kubelet[10543]: E1101 09:54:50.215843   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:54:54 functional-498549 kubelet[10543]: E1101 09:54:54.215578   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:54:57 functional-498549 kubelet[10543]: E1101 09:54:57.221523   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:54:59 functional-498549 kubelet[10543]: E1101 09:54:59.212820   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:55:03 functional-498549 kubelet[10543]: E1101 09:55:03.215403   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:55:07 functional-498549 kubelet[10543]: E1101 09:55:07.216949   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:55:12 functional-498549 kubelet[10543]: E1101 09:55:12.215740   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:55:14 functional-498549 kubelet[10543]: E1101 09:55:14.213424   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:55:14 functional-498549 kubelet[10543]: E1101 09:55:14.217575   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:55:23 functional-498549 kubelet[10543]: E1101 09:55:23.283495   10543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:55:23 functional-498549 kubelet[10543]: E1101 09:55:23.283544   10543 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:55:23 functional-498549 kubelet[10543]: E1101 09:55:23.283621   10543 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6_kubernetes-dashboard(02da92fd-785b-4075-afa2-c8efcf6a51ea): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:55:23 functional-498549 kubelet[10543]: E1101 09:55:23.283653   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:55:24 functional-498549 kubelet[10543]: E1101 09:55:24.215350   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:55:26 functional-498549 kubelet[10543]: E1101 09:55:26.218148   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:55:26 functional-498549 kubelet[10543]: E1101 09:55:26.223360   10543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:55:26 functional-498549 kubelet[10543]: E1101 09:55:26.223502   10543 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:55:26 functional-498549 kubelet[10543]: E1101 09:55:26.223742   10543 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(4a9e3196-d5c6-4e17-87a4-d12b937b8f11): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:55:26 functional-498549 kubelet[10543]: E1101 09:55:26.223810   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	
	
	==> storage-provisioner [5d2dfc549453] <==
	W1101 09:55:09.050949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:11.054574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:11.060083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:13.065026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:13.070926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:15.074852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:15.082131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:17.085717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:17.092546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:19.095545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:19.103595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:21.108517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:21.114079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:23.119882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:23.126948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:25.130515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:25.136933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:27.140864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:27.146347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:29.149317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:29.154894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:31.159397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:31.164931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:33.168441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:33.173781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6367b452e0bf] <==
	I1101 09:48:53.150717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:48:53.155637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-498549 -n functional-498549
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1 (89.2862ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a5d6bbf5c6518d250be962a65d23dc9f7168ad483825e4a14c41e415b12faa38
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:49:35 +0000
	      Finished:     Sat, 01 Nov 2025 09:49:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgmbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wgmbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m3s  default-scheduler  Successfully assigned default/busybox-mount to functional-498549
	  Normal  Pulling    6m3s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.192s (2.192s including waiting). Image size: 4403845 bytes.
	  Normal  Created    6m    kubelet            Created container: mount-munger
	  Normal  Started    6m    kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-qjh5x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdt8m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mdt8m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m59s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-qjh5x to functional-498549
	  Normal   Pulling    2m46s (x5 over 5m58s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m45s (x5 over 5m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m45s (x5 over 5m57s)  kubelet            Error: ErrImagePull
	  Warning  Failed     50s (x20 over 5m57s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    38s (x21 over 5m57s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9whqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9whqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-498549
	  Warning  Failed     4m28s (x3 over 5m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m53s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m52s (x2 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m52s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     49s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    36s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qjqz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6vc78" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.74s)
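Note (not part of the harness output): the sp-pod and mysql events above, and the kubelet log lines for the dashboard pods, all show the same cause — Docker Hub's unauthenticated pull rate limit ("toomanyrequests") left docker.io/nginx and docker.io/mysql:5.7 in ImagePullBackOff, so the PVC-backed pod never started. A minimal mitigation sketch for a rerun, assuming local Docker access on the CI host and the standard minikube `image load` subcommand (this is not something the test harness does today), would be to stage the rate-limited images into the profile before the parallel tests begin:

	# Hypothetical pre-staging step (not part of the test harness): load the
	# Docker Hub images into the node's container runtime ahead of time so the
	# kubelet never has to pull them anonymously during the test window.
	docker pull docker.io/nginx:latest
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-498549 image load docker.io/nginx:latest
	out/minikube-linux-amd64 -p functional-498549 image load docker.io/mysql:5.7

Authenticating the pulls (docker login on the node, or a registry mirror) would address the same limit without pre-staging, at the cost of managing credentials on CI.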

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-498549 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qjh5x" [b8019bd1-737b-49a5-85db-4db6c8dfa405] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-498549 -n functional-498549
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-01 09:59:36.830282635 +0000 UTC m=+1419.895826273
functional_test.go:1804: (dbg) Run:  kubectl --context functional-498549 describe po mysql-5bb876957f-qjh5x -n default
functional_test.go:1804: (dbg) kubectl --context functional-498549 describe po mysql-5bb876957f-qjh5x -n default:
Name:             mysql-5bb876957f-qjh5x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-498549/192.168.39.190
Start Time:       Sat, 01 Nov 2025 09:49:36 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdt8m (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-mdt8m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-qjh5x to functional-498549
  Normal   Pulling    6m47s (x5 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     6m46s (x5 over 9m58s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m46s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m51s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m39s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-498549 logs mysql-5bb876957f-qjh5x -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-498549 logs mysql-5bb876957f-qjh5x -n default: exit status 1 (69.419305ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-qjh5x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-498549 logs mysql-5bb876957f-qjh5x -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
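For triage, the same conclusion can be read directly from the pod's events without the full describe dump; a small, hypothetical kubectl query (field selector and custom-columns are standard kubectl options, the pod name is taken from the report above):

	# Hypothetical triage one-liner: list only event reasons/messages for the
	# failing pod to confirm the toomanyrequests / ImagePullBackOff cause.
	kubectl --context functional-498549 get events -n default \
	  --field-selector involvedObject.name=mysql-5bb876957f-qjh5x \
	  -o custom-columns=REASON:.reason,MESSAGE:.message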
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-498549 -n functional-498549
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-498549 logs -n 25: (1.005753528s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-498549 ssh sudo umount -f /mount-9p                                                                             │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount3 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount2 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ mount          │ -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount1 --alsologtostderr -v=1         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ ssh            │ functional-498549 ssh findmnt -T /mount1                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount2                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh findmnt -T /mount3                                                                                   │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ mount          │ -p functional-498549 --kill=true                                                                                           │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp functional-498549:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3581848362/001/cp-test.txt │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /home/docker/cp-test.txt                                               │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ cp             │ functional-498549 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh -n functional-498549 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format short --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format yaml --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ ssh            │ functional-498549 ssh pgrep buildkitd                                                                                      │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ image          │ functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr                     │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls                                                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format json --alsologtostderr                                                                 │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ image          │ functional-498549 image ls --format table --alsologtostderr                                                                │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ update-context │ functional-498549 update-context --alsologtostderr -v=2                                                                    │ functional-498549 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:36
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:36.297449  474805 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:36.297553  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297560  474805 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:36.297567  474805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.297765  474805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:49:36.298180  474805 out.go:368] Setting JSON to false
	I1101 09:49:36.299529  474805 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5515,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:36.299678  474805 start.go:143] virtualization: kvm guest
	I1101 09:49:36.301354  474805 out.go:179] * [functional-498549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:36.302639  474805 notify.go:221] Checking for updates...
	I1101 09:49:36.302697  474805 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:36.303852  474805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:36.305068  474805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:49:36.306339  474805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:49:36.311055  474805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:36.312284  474805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:36.313874  474805 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:49:36.314367  474805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:36.347103  474805 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:49:36.348125  474805 start.go:309] selected driver: kvm2
	I1101 09:49:36.348137  474805 start.go:930] validating driver "kvm2" against &{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.348265  474805 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:36.349177  474805 cni.go:84] Creating CNI manager for ""
	I1101 09:49:36.349234  474805 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:49:36.349288  474805 start.go:353] cluster config:
	{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.350494  474805 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.477130973Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:50:23 functional-498549 dockerd[8069]: time="2025-11-01T09:50:23.960280845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.460784368Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:50:24 functional-498549 dockerd[8069]: time="2025-11-01T09:50:24.943329516Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:04 functional-498549 dockerd[8069]: time="2025-11-01T09:51:04.459022936Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:51:05 functional-498549 dockerd[8069]: time="2025-11-01T09:51:05.234540847Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:05 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:51:05Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Nov 01 09:51:07 functional-498549 dockerd[8069]: time="2025-11-01T09:51:07.227762909Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.469873152Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:51:17 functional-498549 dockerd[8069]: time="2025-11-01T09:51:17.952236374Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:51:19 functional-498549 dockerd[8069]: time="2025-11-01T09:51:19.198846812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.470393255Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:52:27 functional-498549 dockerd[8069]: time="2025-11-01T09:52:27.953238125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 dockerd[8069]: time="2025-11-01T09:52:43.533396643Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:43 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:52:43Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.462660099Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:52:47 functional-498549 dockerd[8069]: time="2025-11-01T09:52:47.947443430Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:52:50 functional-498549 dockerd[8069]: time="2025-11-01T09:52:50.232188344Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:22 functional-498549 dockerd[8069]: time="2025-11-01T09:55:22.464827797Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:55:23 functional-498549 dockerd[8069]: time="2025-11-01T09:55:23.279828380Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:23 functional-498549 cri-dockerd[8971]: time="2025-11-01T09:55:23Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Nov 01 09:55:26 functional-498549 dockerd[8069]: time="2025-11-01T09:55:26.219041319Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:39 functional-498549 dockerd[8069]: time="2025-11-01T09:55:39.225556222Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 09:55:39 functional-498549 dockerd[8069]: time="2025-11-01T09:55:39.474170966Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:55:39 functional-498549 dockerd[8069]: time="2025-11-01T09:55:39.956602048Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5d6bbf5c6518       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   c9803260d4e64       busybox-mount
	0cb10e8086d8b       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   d4457f1a34a97       hello-node-75c85bcc94-7x2w6
	2a29cc98baa49       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   c139178a45c56       hello-node-connect-7d85dfc575-m4bk7
	c2cdeff012b52       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   3                   f11188eab1c36       coredns-66bc5c9577-s297q
	2440fa540f475       fc25172553d79                                                                                         10 minutes ago      Running             kube-proxy                4                   c582a9a56e2b6       kube-proxy-4vrtg
	5d2dfc5494539       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       4                   02ba7c8a6daa3       storage-provisioner
	85df8e61c49cf       c3994bc696102                                                                                         10 minutes ago      Running             kube-apiserver            0                   1d38edd69a2ba       kube-apiserver-functional-498549
	dd2203b6960ff       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      3                   738aa2c164dcd       etcd-functional-498549
	4abd904c8bef8       c80c8dbafe7dd                                                                                         10 minutes ago      Running             kube-controller-manager   4                   df3ae6f2282d3       kube-controller-manager-functional-498549
	fb93558220c0a       7dd6aaa1717ab                                                                                         10 minutes ago      Running             kube-scheduler            4                   527ef2e948d0f       kube-scheduler-functional-498549
	3389091e55d42       c80c8dbafe7dd                                                                                         10 minutes ago      Exited              kube-controller-manager   3                   0b67e9ab7bff4       kube-controller-manager-functional-498549
	6367b452e0bf5       6e38f40d628db                                                                                         10 minutes ago      Exited              storage-provisioner       3                   1f67feac74f2d       storage-provisioner
	296cd2b455317       fc25172553d79                                                                                         10 minutes ago      Exited              kube-proxy                3                   ce0d0632687fe       kube-proxy-4vrtg
	5f80764b85354       7dd6aaa1717ab                                                                                         10 minutes ago      Exited              kube-scheduler            3                   1a65d981977e2       kube-scheduler-functional-498549
	085f09b49a21f       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   2                   18afadcba9849       coredns-66bc5c9577-s297q
	b17d848698ce0       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      2                   4e1b158191db6       etcd-functional-498549
	
	
	==> coredns [085f09b49a21] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52847 - 51895 "HINFO IN 2719554848982036746.9051042743324441036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066954158s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2cdeff012b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42106 - 17213 "HINFO IN 1324422783965751308.5049648153328789531. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065669623s
	
	
	==> describe nodes <==
	Name:               functional-498549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-498549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-498549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_46_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:46:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-498549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:59:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:55:38 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:55:38 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:55:38 +0000   Sat, 01 Nov 2025 09:46:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:55:38 +0000   Sat, 01 Nov 2025 09:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    functional-498549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 609951c8288c4e84a20f1cb18a186ca3
	  System UUID:                609951c8-288c-4e84-a20f-1cb18a186ca3
	  Boot ID:                    1320ff06-1a33-464a-bca4-b742eeebc6db
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7x2w6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-m4bk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-qjh5x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-s297q                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-498549                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-498549              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-498549     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4vrtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-498549              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-qjqz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6vc78         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-498549 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-498549 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-498549 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-498549 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-498549 event: Registered Node functional-498549 in Controller
	
	
	==> dmesg <==
	[  +1.188102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.111286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.116631] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.097952] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.137926] kauditd_printk_skb: 165 callbacks suppressed
	[Nov 1 09:47] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.651961] kauditd_printk_skb: 276 callbacks suppressed
	[ +15.165531] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.479605] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.536315] kauditd_printk_skb: 467 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 95 callbacks suppressed
	[Nov 1 09:48] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.171279] kauditd_printk_skb: 89 callbacks suppressed
	[ +15.170976] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.454126] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.122368] kauditd_printk_skb: 497 callbacks suppressed
	[Nov 1 09:49] kauditd_printk_skb: 159 callbacks suppressed
	[ +16.402319] kauditd_printk_skb: 23 callbacks suppressed
	[  +2.095229] kauditd_printk_skb: 91 callbacks suppressed
	[  +1.345560] kauditd_printk_skb: 116 callbacks suppressed
	[  +0.795522] kauditd_printk_skb: 145 callbacks suppressed
	[  +1.729136] crun[14014]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.000286] kauditd_printk_skb: 190 callbacks suppressed
	
	
	==> etcd [b17d848698ce] <==
	{"level":"warn","ts":"2025-11-01T09:48:02.480849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.489738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.500529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.510598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.521557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.530837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:02.608072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:48:40.004100Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:48:40.004216Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	{"level":"error","ts":"2025-11-01T09:48:40.004286Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:48:47.009921Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.010183Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dc6e2f4e9dcc679a","current-leader-member-id":"dc6e2f4e9dcc679a"}
	{"level":"info","ts":"2025-11-01T09:48:47.010337Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:48:47.010348Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011342Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011426Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011452Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011618Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:48:47.011704Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:48:47.011746Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014152Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"error","ts":"2025-11-01T09:48:47.014479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.190:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:48:47.014518Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2025-11-01T09:48:47.014527Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-498549","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	
	
	==> etcd [dd2203b6960f] <==
	{"level":"warn","ts":"2025-11-01T09:48:59.963363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.970127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:48:59.988066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.004480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.043942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.058161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.077579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.092783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.125129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.139488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.155940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.163674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.175386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.189147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.199729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.207359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.220765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.229268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.243183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.249186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.260727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:49:00.325355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:58:59.289457Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1315}
	{"level":"info","ts":"2025-11-01T09:58:59.314186Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1315,"took":"23.735253ms","hash":2034768371,"current-db-size-bytes":3801088,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1937408,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-11-01T09:58:59.314289Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2034768371,"revision":1315,"compact-revision":-1}
	
	
	==> kernel <==
	 09:59:37 up 13 min,  0 users,  load average: 0.02, 0.18, 0.23
	Linux functional-498549 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [85df8e61c49c] <==
	I1101 09:49:01.103002       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:49:01.103341       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:49:01.103523       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:49:01.103581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:49:01.103588       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:49:01.105959       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:49:01.106049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:49:01.106926       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:49:01.208821       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:49:01.898624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:49:02.936410       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:49:02.983562       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:49:03.024715       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:49:03.033726       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:49:04.381222       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:49:04.680857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:49:04.731850       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:49:19.628922       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.149.121"}
	I1101 09:49:25.086837       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.37.209"}
	I1101 09:49:25.711299       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.165.63"}
	I1101 09:49:36.551438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.213.185"}
	I1101 09:49:37.492241       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:49:37.809701       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.39.79"}
	I1101 09:49:37.830412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.94.106"}
	I1101 09:59:01.013537       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3389091e55d4] <==
	
	
	==> kube-controller-manager [4abd904c8bef] <==
	I1101 09:49:04.394774       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:49:04.399038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:49:04.399123       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:49:04.402483       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:49:04.405870       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:49:04.405877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:49:04.410237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:49:04.412726       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:49:04.415761       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:49:04.420118       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:49:04.427583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:49:04.427590       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:49:04.429024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:49:04.430113       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:49:04.430275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:49:04.430334       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:49:04.430663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:49:04.432636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	E1101 09:49:37.655781       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666279       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.666351       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.679494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.682123       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.688264       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:49:37.695133       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2440fa540f47] <==
	I1101 09:49:01.938675       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:49:02.039605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:49:02.039691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.190"]
	E1101 09:49:02.039807       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:49:02.227104       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:49:02.227179       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:49:02.227205       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:49:02.253641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:49:02.255681       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:49:02.255722       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:02.265496       1 config.go:309] "Starting node config controller"
	I1101 09:49:02.265525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:49:02.265532       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:49:02.265846       1 config.go:200] "Starting service config controller"
	I1101 09:49:02.265872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:49:02.265888       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:49:02.265891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:49:02.265901       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:49:02.265904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:49:02.366053       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:49:02.366514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:49:02.366535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [296cd2b45531] <==
	I1101 09:48:53.195792       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:48:53.305077       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:48:53.310112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-498549&limit=500&resourceVersion=0\": dial tcp 192.168.39.190:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [5f80764b8535] <==
	I1101 09:48:54.510011       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [fb93558220c0] <==
	I1101 09:48:58.639113       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:49:00.949063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:49:00.949147       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:49:00.949157       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:49:00.949163       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:49:01.020551       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:49:01.021842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:49:01.024490       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.024562       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:49:01.025433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:49:01.025604       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:49:01.124874       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:58:18 functional-498549 kubelet[10543]: E1101 09:58:18.212850   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:58:22 functional-498549 kubelet[10543]: E1101 09:58:22.215728   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:58:23 functional-498549 kubelet[10543]: E1101 09:58:23.216244   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:58:28 functional-498549 kubelet[10543]: E1101 09:58:28.215841   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:58:29 functional-498549 kubelet[10543]: E1101 09:58:29.221807   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:58:33 functional-498549 kubelet[10543]: E1101 09:58:33.217413   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:58:37 functional-498549 kubelet[10543]: E1101 09:58:37.216525   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:58:40 functional-498549 kubelet[10543]: E1101 09:58:40.213526   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:58:43 functional-498549 kubelet[10543]: E1101 09:58:43.217359   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:58:48 functional-498549 kubelet[10543]: E1101 09:58:48.215691   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:58:49 functional-498549 kubelet[10543]: E1101 09:58:49.214864   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:58:53 functional-498549 kubelet[10543]: E1101 09:58:53.212658   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:58:54 functional-498549 kubelet[10543]: E1101 09:58:54.217345   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:59:00 functional-498549 kubelet[10543]: E1101 09:59:00.215909   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:59:04 functional-498549 kubelet[10543]: E1101 09:59:04.215400   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:59:07 functional-498549 kubelet[10543]: E1101 09:59:07.213899   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:59:09 functional-498549 kubelet[10543]: E1101 09:59:09.215299   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:59:15 functional-498549 kubelet[10543]: E1101 09:59:15.217167   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:59:19 functional-498549 kubelet[10543]: E1101 09:59:19.214511   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:59:19 functional-498549 kubelet[10543]: E1101 09:59:19.217452   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:59:22 functional-498549 kubelet[10543]: E1101 09:59:22.216106   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	Nov 01 09:59:30 functional-498549 kubelet[10543]: E1101 09:59:30.215155   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qjqz6" podUID="02da92fd-785b-4075-afa2-c8efcf6a51ea"
	Nov 01 09:59:31 functional-498549 kubelet[10543]: E1101 09:59:31.216260   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qjh5x" podUID="b8019bd1-737b-49a5-85db-4db6c8dfa405"
	Nov 01 09:59:32 functional-498549 kubelet[10543]: E1101 09:59:32.213323   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="4a9e3196-d5c6-4e17-87a4-d12b937b8f11"
	Nov 01 09:59:36 functional-498549 kubelet[10543]: E1101 09:59:36.214679   10543 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6vc78" podUID="a57177cd-6b59-44c6-9385-d3d685e7a5a4"
	
	
	==> storage-provisioner [5d2dfc549453] <==
	W1101 09:59:12.302687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:14.307759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:14.316276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:16.319649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:16.325203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:18.329144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:18.337167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:20.340699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:20.346750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:22.350937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:22.357451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:24.362375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:24.369290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:26.373959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:26.379254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:28.383665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:28.392044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:30.394818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:30.400469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:32.404465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:32.409377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:34.413119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:34.418681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:36.422362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:36.429596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6367b452e0bf] <==
	I1101 09:48:53.150717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:48:53.155637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-498549 -n functional-498549
helpers_test.go:269: (dbg) Run:  kubectl --context functional-498549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1 (95.996034ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a5d6bbf5c6518d250be962a65d23dc9f7168ad483825e4a14c41e415b12faa38
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:49:35 +0000
	      Finished:     Sat, 01 Nov 2025 09:49:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgmbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wgmbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-498549
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.192s (2.192s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-qjh5x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdt8m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mdt8m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-qjh5x to functional-498549
	  Normal   Pulling    6m49s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m48s (x5 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m48s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-498549/192.168.39.190
	Start Time:       Sat, 01 Nov 2025 09:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9whqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9whqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-498549
	  Warning  Failed     8m31s (x3 over 9m48s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m56s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m55s (x2 over 10m)    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m55s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qjqz6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6vc78" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-498549 describe pod busybox-mount mysql-5bb876957f-qjh5x sp-pod dashboard-metrics-scraper-77bf4d6c4c-qjqz6 kubernetes-dashboard-855c9754f9-6vc78: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.18s)
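The kubelet log and pod events above show every failing pull (docker.io/mysql:5.7, docker.io/nginx, and the kubernetesui dashboard images) being rejected by Docker Hub with "toomanyrequests: You have reached your unauthenticated pull rate limit", so the mysql, sp-pod (nginx) and kubernetes-dashboard pods are all stuck in ImagePullBackOff for the same reason: registry rate limiting rather than anything in the cluster itself. A minimal sketch of one way to sidestep this on a rerun, assuming the host still has pull quota (or already has the image locally) and the functional-498549 profile used above:

    # Pre-load the image into the minikube node so the in-cluster kubelet never pulls it from Docker Hub
    out/minikube-linux-amd64 -p functional-498549 image load docker.io/mysql:5.7
    out/minikube-linux-amd64 -p functional-498549 image ls | grep mysql

The same approach applies to docker.io/nginx and the dashboard images; authenticating the node against Docker Hub or pointing it at a registry mirror would avoid the anonymous-pull limit altogether.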

                                                
                                    

Test pass (326/364)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 10.03
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 85.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 206.14
29 TestAddons/serial/Volcano 43.53
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.57
35 TestAddons/parallel/Registry 18.51
36 TestAddons/parallel/RegistryCreds 0.74
37 TestAddons/parallel/Ingress 22.22
38 TestAddons/parallel/InspektorGadget 5.19
39 TestAddons/parallel/MetricsServer 5.99
41 TestAddons/parallel/CSI 55.46
42 TestAddons/parallel/Headlamp 24.41
43 TestAddons/parallel/CloudSpanner 5.48
45 TestAddons/parallel/NvidiaDevicePlugin 5.49
46 TestAddons/parallel/Yakd 10.83
48 TestAddons/StoppedEnableDisable 13.87
49 TestCertOptions 84.54
50 TestCertExpiration 345.72
51 TestDockerFlags 45.5
52 TestForceSystemdFlag 90.58
53 TestForceSystemdEnv 58.45
58 TestErrorSpam/setup 41.1
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.7
61 TestErrorSpam/pause 1.27
62 TestErrorSpam/unpause 1.61
63 TestErrorSpam/stop 5.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 62.64
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 58.88
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.92
75 TestFunctional/serial/CacheCmd/cache/add_local 1.4
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.05
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 55.62
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 0.97
86 TestFunctional/serial/LogsFileCmd 0.99
87 TestFunctional/serial/InvalidService 4.57
89 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.8
97 TestFunctional/parallel/ServiceCmdConnect 9.41
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 0.97
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.07
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.19
113 TestFunctional/parallel/License 0.44
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.42
116 TestFunctional/parallel/DockerEnv/bash 0.75
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
122 TestFunctional/parallel/ImageCommands/Setup 1.76
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
133 TestFunctional/parallel/ProfileCmd/profile_list 0.34
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
135 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.97
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.72
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.34
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.54
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
143 TestFunctional/parallel/MountCmd/any-port 7.84
144 TestFunctional/parallel/ServiceCmd/List 0.28
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
150 TestFunctional/parallel/ServiceCmd/Format 0.26
151 TestFunctional/parallel/ServiceCmd/URL 0.26
152 TestFunctional/parallel/MountCmd/specific-port 1.41
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.13
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 135.55
161 TestMultiControlPlane/serial/StartCluster 241.6
162 TestMultiControlPlane/serial/DeployApp 6.42
163 TestMultiControlPlane/serial/PingHostFromPods 1.48
164 TestMultiControlPlane/serial/AddWorkerNode 70.84
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
167 TestMultiControlPlane/serial/CopyFile 11.13
168 TestMultiControlPlane/serial/StopSecondaryNode 14.5
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.57
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 173.59
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.3
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
175 TestMultiControlPlane/serial/StopCluster 39.71
176 TestMultiControlPlane/serial/RestartCluster 109.8
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
178 TestMultiControlPlane/serial/AddSecondaryNode 86.26
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.7
182 TestImageBuild/serial/Setup 42.65
183 TestImageBuild/serial/NormalBuild 1.77
184 TestImageBuild/serial/BuildWithBuildArg 1
185 TestImageBuild/serial/BuildWithDockerIgnore 0.67
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1
191 TestJSONOutput/start/Command 60.89
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.61
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.55
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 6.7
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.25
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 89.09
223 TestMountStart/serial/StartWithMountFirst 25.78
224 TestMountStart/serial/VerifyMountFirst 0.32
225 TestMountStart/serial/StartWithMountSecond 25.79
226 TestMountStart/serial/VerifyMountSecond 0.31
227 TestMountStart/serial/DeleteFirst 0.73
228 TestMountStart/serial/VerifyMountPostDelete 0.32
229 TestMountStart/serial/Stop 1.28
230 TestMountStart/serial/RestartStopped 20.22
231 TestMountStart/serial/VerifyMountPostStop 0.31
234 TestMultiNode/serial/FreshStart2Nodes 119.19
235 TestMultiNode/serial/DeployApp2Nodes 5.1
236 TestMultiNode/serial/PingHostFrom2Pods 0.93
237 TestMultiNode/serial/AddNode 49.65
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.45
240 TestMultiNode/serial/CopyFile 6.09
241 TestMultiNode/serial/StopNode 2.42
242 TestMultiNode/serial/StartAfterStop 41.3
243 TestMultiNode/serial/RestartKeepsNodes 154.74
244 TestMultiNode/serial/DeleteNode 2.15
245 TestMultiNode/serial/StopMultiNode 26.39
246 TestMultiNode/serial/RestartMultiNode 90.67
247 TestMultiNode/serial/ValidateNameConflict 45.5
252 TestPreload 204.57
254 TestScheduledStopUnix 113.61
255 TestSkaffold 127.2
258 TestRunningBinaryUpgrade 130.51
260 TestKubernetesUpgrade 189.06
280 TestStoppedBinaryUpgrade/Setup 2.67
281 TestStoppedBinaryUpgrade/Upgrade 94.83
283 TestPause/serial/Start 59.93
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
285 TestPause/serial/SecondStartNoReconfiguration 56.51
287 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
288 TestNoKubernetes/serial/StartWithK8s 48.48
289 TestNoKubernetes/serial/StartWithStopK8s 21.62
290 TestPause/serial/Pause 0.76
291 TestPause/serial/VerifyStatus 0.3
292 TestPause/serial/Unpause 0.8
293 TestPause/serial/PauseAgain 0.87
294 TestPause/serial/DeletePaused 0.88
295 TestPause/serial/VerifyDeletedResources 0.7
296 TestISOImage/Setup 28.05
297 TestNoKubernetes/serial/Start 54.74
299 TestISOImage/Binaries/crictl 0.23
300 TestISOImage/Binaries/curl 0.32
301 TestISOImage/Binaries/docker 0.23
302 TestISOImage/Binaries/git 0.23
303 TestISOImage/Binaries/iptables 0.22
304 TestISOImage/Binaries/podman 0.22
305 TestISOImage/Binaries/rsync 0.22
306 TestISOImage/Binaries/socat 0.23
307 TestISOImage/Binaries/wget 0.21
308 TestISOImage/Binaries/VBoxControl 0.22
309 TestISOImage/Binaries/VBoxService 0.22
310 TestNetworkPlugins/group/auto/Start 100.96
311 TestNetworkPlugins/group/kindnet/Start 128.69
312 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
313 TestNoKubernetes/serial/ProfileList 1.32
314 TestNoKubernetes/serial/Stop 1.39
315 TestNoKubernetes/serial/StartNoArgs 57.66
316 TestNetworkPlugins/group/calico/Start 141.34
317 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
318 TestNetworkPlugins/group/auto/KubeletFlags 0.19
319 TestNetworkPlugins/group/auto/NetCatPod 16.3
320 TestNetworkPlugins/group/custom-flannel/Start 89.29
321 TestNetworkPlugins/group/auto/DNS 0.17
322 TestNetworkPlugins/group/auto/Localhost 0.13
323 TestNetworkPlugins/group/auto/HairPin 0.14
324 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
325 TestNetworkPlugins/group/false/Start 74.57
326 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
327 TestNetworkPlugins/group/kindnet/NetCatPod 33.25
328 TestNetworkPlugins/group/kindnet/DNS 0.22
329 TestNetworkPlugins/group/kindnet/Localhost 0.2
330 TestNetworkPlugins/group/kindnet/HairPin 0.2
331 TestNetworkPlugins/group/calico/ControllerPod 6.01
332 TestNetworkPlugins/group/enable-default-cni/Start 70.56
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
334 TestNetworkPlugins/group/calico/KubeletFlags 0.21
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.37
336 TestNetworkPlugins/group/calico/NetCatPod 13.33
337 TestNetworkPlugins/group/custom-flannel/DNS 0.17
338 TestNetworkPlugins/group/calico/DNS 0.21
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
340 TestNetworkPlugins/group/calico/Localhost 0.16
341 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
342 TestNetworkPlugins/group/calico/HairPin 0.17
343 TestNetworkPlugins/group/false/KubeletFlags 0.19
344 TestNetworkPlugins/group/false/NetCatPod 11.26
345 TestNetworkPlugins/group/false/DNS 0.21
346 TestNetworkPlugins/group/false/Localhost 0.19
347 TestNetworkPlugins/group/false/HairPin 0.18
348 TestNetworkPlugins/group/flannel/Start 70.02
349 TestNetworkPlugins/group/bridge/Start 86.85
350 TestNetworkPlugins/group/kubenet/Start 119.57
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.27
353 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
356 TestNetworkPlugins/group/flannel/ControllerPod 5.17
358 TestStartStop/group/old-k8s-version/serial/FirstStart 65.17
359 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
360 TestNetworkPlugins/group/flannel/NetCatPod 13.48
361 TestNetworkPlugins/group/flannel/DNS 0.16
362 TestNetworkPlugins/group/flannel/Localhost 0.15
363 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
364 TestNetworkPlugins/group/flannel/HairPin 0.18
365 TestNetworkPlugins/group/bridge/NetCatPod 12.36
366 TestNetworkPlugins/group/bridge/DNS 0.19
367 TestNetworkPlugins/group/bridge/Localhost 0.13
368 TestNetworkPlugins/group/bridge/HairPin 0.16
370 TestStartStop/group/no-preload/serial/FirstStart 79.36
372 TestStartStop/group/embed-certs/serial/FirstStart 78.51
373 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
374 TestNetworkPlugins/group/kubenet/NetCatPod 14.28
375 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
376 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
377 TestNetworkPlugins/group/kubenet/DNS 0.2
378 TestNetworkPlugins/group/kubenet/Localhost 0.19
379 TestNetworkPlugins/group/kubenet/HairPin 0.19
380 TestStartStop/group/old-k8s-version/serial/Stop 13.99
381 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
382 TestStartStop/group/old-k8s-version/serial/SecondStart 45.91
384 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.05
385 TestStartStop/group/no-preload/serial/DeployApp 11.38
386 TestStartStop/group/embed-certs/serial/DeployApp 10.33
387 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
388 TestStartStop/group/no-preload/serial/Stop 14.68
389 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
390 TestStartStop/group/embed-certs/serial/Stop 14.75
391 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
392 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/no-preload/serial/SecondStart 48.57
394 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
395 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
396 TestStartStop/group/embed-certs/serial/SecondStart 58.53
397 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
398 TestStartStop/group/old-k8s-version/serial/Pause 2.84
400 TestStartStop/group/newest-cni/serial/FirstStart 74.91
401 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
402 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
403 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.14
404 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
406 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
407 TestStartStop/group/no-preload/serial/Pause 3.21
408 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
409 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.73
411 TestISOImage/PersistentMounts//data 0.2
412 TestISOImage/PersistentMounts//var/lib/docker 0.2
413 TestISOImage/PersistentMounts//var/lib/cni 0.49
414 TestISOImage/PersistentMounts//var/lib/kubelet 0.32
415 TestISOImage/PersistentMounts//var/lib/minikube 0.2
416 TestISOImage/PersistentMounts//var/lib/toolbox 0.47
417 TestISOImage/PersistentMounts//var/lib/boot2docker 0.49
418 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
419 TestISOImage/eBPFSupport 0.19
420 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
421 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
422 TestStartStop/group/embed-certs/serial/Pause 3.11
423 TestStartStop/group/newest-cni/serial/DeployApp 0
424 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
425 TestStartStop/group/newest-cni/serial/Stop 13.57
426 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
427 TestStartStop/group/newest-cni/serial/SecondStart 32.86
428 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
429 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
430 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
431 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.57
432 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
433 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
434 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
435 TestStartStop/group/newest-cni/serial/Pause 2.27
x
+
TestDownloadOnly/v1.28.0/json-events (21.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-430749 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-430749 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (21.361073945s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:36:18.336005  468355 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1101 09:36:18.336114  468355 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-430749
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-430749: exit status 85 (78.849002ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-430749 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-430749 │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:35:57
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:35:57.030470  468366 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:35:57.031246  468366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:35:57.031264  468366 out.go:374] Setting ErrFile to fd 2...
	I1101 09:35:57.031271  468366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:35:57.031835  468366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	W1101 09:35:57.032151  468366 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21830-464466/.minikube/config/config.json: open /home/jenkins/minikube-integration/21830-464466/.minikube/config/config.json: no such file or directory
	I1101 09:35:57.032670  468366 out.go:368] Setting JSON to true
	I1101 09:35:57.033549  468366 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4696,"bootTime":1761985061,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:35:57.033645  468366 start.go:143] virtualization: kvm guest
	I1101 09:35:57.035764  468366 out.go:99] [download-only-430749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1101 09:35:57.035911  468366 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:35:57.035963  468366 notify.go:221] Checking for updates...
	I1101 09:35:57.037260  468366 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:35:57.038637  468366 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:35:57.040097  468366 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:35:57.045124  468366 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:35:57.046461  468366 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:35:57.048819  468366 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:35:57.049175  468366 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:35:57.081295  468366 out.go:99] Using the kvm2 driver based on user configuration
	I1101 09:35:57.081326  468366 start.go:309] selected driver: kvm2
	I1101 09:35:57.081332  468366 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:35:57.081663  468366 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:35:57.082167  468366 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 09:35:57.082327  468366 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:35:57.082367  468366 cni.go:84] Creating CNI manager for ""
	I1101 09:35:57.082437  468366 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:35:57.082446  468366 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:35:57.082489  468366 start.go:353] cluster config:
	{Name:download-only-430749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-430749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:35:57.082682  468366 iso.go:125] acquiring lock: {Name:mk3fea4fe84098591e9ecbbeb78880fff096fc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:35:57.084283  468366 out.go:99] Downloading VM boot image ...
	I1101 09:35:57.084323  468366 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21830-464466/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:36:07.456859  468366 out.go:99] Starting "download-only-430749" primary control-plane node in "download-only-430749" cluster
	I1101 09:36:07.456931  468366 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1101 09:36:07.552724  468366 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1101 09:36:07.552768  468366 cache.go:59] Caching tarball of preloaded images
	I1101 09:36:07.553012  468366 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1101 09:36:07.554716  468366 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:36:07.554738  468366 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1101 09:36:07.657975  468366 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1101 09:36:07.658128  468366 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-430749 host does not exist
	  To start a cluster, run: "minikube start -p download-only-430749"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
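
For reference, the two downloads logged above (the boot ISO and the preloaded-images tarball) can be reproduced outside the test harness with a cache-only start; a rough sketch, using an illustrative profile name and the default MINIKUBE_HOME rather than this job's paths:

	# populate the cache without creating a VM (sketch, not part of this run)
	minikube start -p download-demo --download-only --force \
	  --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2
	# the artifacts land in the cache directories referenced in the log above
	ls ~/.minikube/cache/iso/amd64/
	ls ~/.minikube/cache/preloaded-tarball/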

TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-430749
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (10.03s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238613 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238613 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 : (10.033229678s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.03s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:36:28.768929  468355 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1101 09:36:28.769001  468355 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238613
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238613: exit status 85 (75.026152ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-430749 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-430749 │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ delete  │ -p download-only-430749                                                                                                                         │ download-only-430749 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -o=json --download-only -p download-only-238613 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=kvm2 │ download-only-238613 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:36:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:36:18.790109  468588 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:36:18.790416  468588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:18.790426  468588 out.go:374] Setting ErrFile to fd 2...
	I1101 09:36:18.790430  468588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:18.790610  468588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:36:18.791108  468588 out.go:368] Setting JSON to true
	I1101 09:36:18.792076  468588 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4718,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:36:18.792179  468588 start.go:143] virtualization: kvm guest
	I1101 09:36:18.794023  468588 out.go:99] [download-only-238613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:36:18.794160  468588 notify.go:221] Checking for updates...
	I1101 09:36:18.795337  468588 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:36:18.796761  468588 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:36:18.798135  468588 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:36:18.799565  468588 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:36:18.800819  468588 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:36:18.803083  468588 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:36:18.803323  468588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:18.834990  468588 out.go:99] Using the kvm2 driver based on user configuration
	I1101 09:36:18.835018  468588 start.go:309] selected driver: kvm2
	I1101 09:36:18.835024  468588 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:36:18.835341  468588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:36:18.835850  468588 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 09:36:18.836023  468588 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:36:18.836067  468588 cni.go:84] Creating CNI manager for ""
	I1101 09:36:18.836123  468588 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 09:36:18.836141  468588 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:36:18.836188  468588 start.go:353] cluster config:
	{Name:download-only-238613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-238613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:36:18.836280  468588 iso.go:125] acquiring lock: {Name:mk3fea4fe84098591e9ecbbeb78880fff096fc58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:36:18.837511  468588 out.go:99] Starting "download-only-238613" primary control-plane node in "download-only-238613" cluster
	I1101 09:36:18.837529  468588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1101 09:36:19.288480  468588 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1101 09:36:19.288524  468588 cache.go:59] Caching tarball of preloaded images
	I1101 09:36:19.288701  468588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1101 09:36:19.290551  468588 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1101 09:36:19.290582  468588 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1101 09:36:19.389519  468588 preload.go:290] Got checksum from GCS API "d7f0ccd752ff15c628c6fc8ef8c8033e"
	I1101 09:36:19.389580  468588 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4?checksum=md5:d7f0ccd752ff15c628c6fc8ef8c8033e -> /home/jenkins/minikube-integration/21830-464466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-238613 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238613"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238613
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1101 09:36:29.467930  468355 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-218698 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-218698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-218698
--- PASS: TestBinaryMirror (0.65s)
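
TestBinaryMirror points the kubectl/kubeadm/kubelet downloads at a local HTTP endpoint (here http://127.0.0.1:38615) instead of dl.k8s.io. A rough equivalent by hand, assuming a directory of release binaries is already laid out for serving; the directory path, port, and profile name below are illustrative:

	# serve a local mirror of the Kubernetes release binaries (sketch)
	python3 -m http.server 38615 --directory /srv/k8s-mirror &
	# have minikube fetch binaries from the mirror rather than dl.k8s.io
	minikube start -p mirror-demo --download-only --binary-mirror http://127.0.0.1:38615 --driver=kvm2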

TestOffline (85.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-180165 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-180165 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m24.280167341s)
helpers_test.go:175: Cleaning up "offline-docker-180165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-180165
--- PASS: TestOffline (85.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-171954
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-171954: exit status 85 (67.382328ms)

-- stdout --
	* Profile "addons-171954" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-171954"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-171954
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-171954: exit status 85 (66.222071ms)

-- stdout --
	* Profile "addons-171954" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-171954"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (206.14s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-171954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-171954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m26.144017154s)
--- PASS: TestAddons/Setup (206.14s)
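
The setup above enables every addon under test in a single start via repeated --addons flags. The same toggles work against the running profile, which is how the per-addon cleanup steps later in this report are driven; a brief sketch (metrics-server chosen arbitrarily as the example addon):

	minikube -p addons-171954 addons list
	minikube -p addons-171954 addons enable metrics-server
	minikube -p addons-171954 addons disable metrics-server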

TestAddons/serial/Volcano (43.53s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 22.673827ms
addons_test.go:868: volcano-scheduler stabilized in 22.706803ms
addons_test.go:876: volcano-admission stabilized in 23.177286ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-4qv5s" [e780cf24-5407-44de-9f32-4007f955023d] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005862143s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-k6zkb" [357320a0-3e52-4137-8220-6a16f09d04fa] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00675462s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-njv27" [8a8cb5b2-9d1c-4ace-b99c-7ee198e4017f] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003979659s
addons_test.go:903: (dbg) Run:  kubectl --context addons-171954 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-171954 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-171954 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [02c6d02d-f1a6-4242-8f08-a23eb70234ae] Pending
helpers_test.go:352: "test-job-nginx-0" [02c6d02d-f1a6-4242-8f08-a23eb70234ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [02c6d02d-f1a6-4242-8f08-a23eb70234ae] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.005186163s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable volcano --alsologtostderr -v=1: (12.017537805s)
--- PASS: TestAddons/serial/Volcano (43.53s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-171954 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-171954 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (10.57s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-171954 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-171954 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5dbcfaaa-5638-4139-91c8-86c6f46861e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5dbcfaaa-5638-4139-91c8-86c6f46861e5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005123662s
addons_test.go:694: (dbg) Run:  kubectl --context addons-171954 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-171954 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-171954 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.57s)
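
The gcp-auth addon injects the fake Google credentials into workload pods, which is what the printenv checks above confirm. The same verification can be run by hand against the pod the test created (busybox in the default namespace):

	kubectl --context addons-171954 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	kubectl --context addons-171954 exec busybox -- printenv GOOGLE_CLOUD_PROJECT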

TestAddons/parallel/Registry (18.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.362688ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9bkw7" [243856d4-1236-40bb-861f-009deae9b590] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.073319485s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mq64f" [2bb47043-faae-42c9-bc07-488e91a1a3c8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.147801737s
addons_test.go:392: (dbg) Run:  kubectl --context addons-171954 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-171954 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-171954 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.419403776s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 ip
2025/11/01 09:41:17 [DEBUG] GET http://192.168.39.221:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.51s)
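
The registry addon is reachable in-cluster at registry.kube-system.svc.cluster.local, and registry-proxy exposes it on the node IP at port 5000 (the DEBUG GET above). From the host it can be probed with the standard registry API; a sketch (the /v2/_catalog endpoint is part of the Docker registry HTTP API, not something specific to this test):

	REGISTRY_HOST=$(minikube -p addons-171954 ip)
	curl -s "http://${REGISTRY_HOST}:5000/v2/_catalog"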

TestAddons/parallel/RegistryCreds (0.74s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 13.480264ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-171954
addons_test.go:332: (dbg) Run:  kubectl --context addons-171954 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

TestAddons/parallel/Ingress (22.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-171954 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-171954 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-171954 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e99b0b2f-dd5c-41f0-8d58-1b2162539262] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e99b0b2f-dd5c-41f0-8d58-1b2162539262] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004838932s
I1101 09:41:11.174902  468355 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-171954 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.221
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable ingress-dns --alsologtostderr -v=1: (1.490346199s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable ingress --alsologtostderr -v=1: (7.930187093s)
--- PASS: TestAddons/parallel/Ingress (22.22s)
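
The ingress check above curls the controller from inside the VM with an explicit Host header. From the host machine the equivalent, using the node IP that the test also prints, would look roughly like this (nginx.example.com comes from the test's nginx-ingress-v1.yaml):

	MINIKUBE_IP=$(minikube -p addons-171954 ip)
	# as the test does: hit the ingress controller directly with a Host header
	curl -s -H 'Host: nginx.example.com' "http://${MINIKUBE_IP}/"
	# or let curl resolve the ingress hostname to the node IP
	curl -s --resolve "nginx.example.com:80:${MINIKUBE_IP}" http://nginx.example.com/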

TestAddons/parallel/InspektorGadget (5.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8jl2d" [46a20318-1e11-4537-9fac-37a0240e1421] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005651232s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.19s)

TestAddons/parallel/MetricsServer (5.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.578357ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mtpch" [199cb650-4ad7-4124-a7b0-d5f45cafb213] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.07390231s
addons_test.go:463: (dbg) Run:  kubectl --context addons-171954 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.99s)

TestAddons/parallel/CSI (55.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1101 09:41:24.008349  468355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:41:24.015687  468355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:41:24.015717  468355 kapi.go:107] duration metric: took 7.392907ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.404161ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-171954 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-171954 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0ef71b03-4def-494d-b406-fb8f6e1d686a] Pending
helpers_test.go:352: "task-pv-pod" [0ef71b03-4def-494d-b406-fb8f6e1d686a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0ef71b03-4def-494d-b406-fb8f6e1d686a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004591274s
addons_test.go:572: (dbg) Run:  kubectl --context addons-171954 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-171954 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-171954 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-171954 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-171954 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-171954 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-171954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-171954 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [401fb9a2-cd93-4c8a-844d-ae983f0958f3] Pending
helpers_test.go:352: "task-pv-pod-restore" [401fb9a2-cd93-4c8a-844d-ae983f0958f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [401fb9a2-cd93-4c8a-844d-ae983f0958f3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.060445208s
addons_test.go:614: (dbg) Run:  kubectl --context addons-171954 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-171954 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-171954 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.927003294s)
--- PASS: TestAddons/parallel/CSI (55.46s)

TestAddons/parallel/Headlamp (24.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-171954 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-171954 --alsologtostderr -v=1: (1.242041676s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-tbjk2" [15aa7d2c-14cd-457c-8899-b54ee53155c5] Pending
helpers_test.go:352: "headlamp-6945c6f4d-tbjk2" [15aa7d2c-14cd-457c-8899-b54ee53155c5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-tbjk2" [15aa7d2c-14cd-457c-8899-b54ee53155c5] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.006225007s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable headlamp --alsologtostderr -v=1: (6.163665312s)
--- PASS: TestAddons/parallel/Headlamp (24.41s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-mk9hw" [30da5ea2-5957-4e0d-8e2e-ec19811a362c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005020606s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-h99d4" [cb2516a7-94c8-4a1d-ac21-4d4c99fa0089] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.022064537s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gxhpg" [a911aa2a-3a9c-4b4f-a541-329f67d3a830] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.013952024s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-171954 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-171954 addons disable yakd --alsologtostderr -v=1: (5.819115297s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

TestAddons/StoppedEnableDisable (13.87s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-171954
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-171954: (13.655541697s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-171954
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-171954
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-171954
--- PASS: TestAddons/StoppedEnableDisable (13.87s)

TestCertOptions (84.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-058116 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1101 10:34:56.336439  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-058116 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m23.095273219s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-058116 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-058116 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-058116 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-058116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-058116
--- PASS: TestCertOptions (84.54s)
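
TestCertOptions asks for extra apiserver SANs (127.0.0.1, 192.168.15.15, localhost, www.google.com) and a non-default port, then reads the generated certificate back over SSH. The SAN list can be pulled out of that same openssl output while the profile still exists; a sketch (the grep pattern is illustrative):

	minikube -p cert-options-058116 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'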

TestCertExpiration (345.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-386766 --memory=3072 --cert-expiration=3m --driver=kvm2 
E1101 10:34:24.638543  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-386766 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m43.036090736s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-386766 --memory=3072 --cert-expiration=8760h --driver=kvm2 
E1101 10:38:56.830899  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:56.837345  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:56.848837  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:56.870359  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:56.911909  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:56.993391  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:57.154969  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-386766 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m1.673010544s)
helpers_test.go:175: Cleaning up "cert-expiration-386766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-386766
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-386766: (1.010568544s)
--- PASS: TestCertExpiration (345.72s)
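
The expiration test first issues certificates valid for only 3 minutes, then re-runs start with --cert-expiration=8760h (one year; the 26280h0m0s seen in the cluster config dumps earlier in this report is the three-year default) to force regeneration. The resulting expiry can be confirmed from inside the node; a sketch, reusing the certificate path shown in TestCertOptions:

	minikube -p cert-expiration-386766 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"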

TestDockerFlags (45.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-489250 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-489250 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (44.004961156s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-489250 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-489250 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-489250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-489250
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-489250: (1.059795913s)
--- PASS: TestDockerFlags (45.50s)
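
The --docker-env and --docker-opt values are written into the Docker daemon's systemd unit inside the VM, which is what the two systemctl queries above inspect. Given the flags passed to this run, one would expect FOO=BAR and BAZ=BAT in the Environment property and the debug and icc=true options in ExecStart; a sketch of the same check:

	minikube -p docker-flags-489250 ssh "sudo systemctl show docker --property=Environment --no-pager"
	minikube -p docker-flags-489250 ssh "sudo systemctl show docker --property=ExecStart --no-pager"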

TestForceSystemdFlag (90.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-245430 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-245430 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m29.474732497s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-245430 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-245430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-245430
--- PASS: TestForceSystemdFlag (90.58s)

TestForceSystemdEnv (58.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-723858 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-723858 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (56.989161807s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-723858 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-723858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-723858
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-723858: (1.229024432s)
--- PASS: TestForceSystemdEnv (58.45s)
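
Note: both force-systemd tests reduce to the same manual check: start a cluster that forces the systemd cgroup driver, then ask dockerd which driver it is using. A rough equivalent with a throwaway profile name (the Env variant sets MINIKUBE_FORCE_SYSTEMD=true in the environment instead of passing the flag):

    # Force the systemd cgroup driver at start time.
    minikube start -p systemd-check --memory=3072 --force-systemd --driver=kvm2

    # Should print "systemd" rather than "cgroupfs".
    minikube -p systemd-check ssh "docker info --format {{.CgroupDriver}}"

    minikube delete -p systemd-check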

                                                
                                    
TestErrorSpam/setup (41.1s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-364835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-364835 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-364835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-364835 --driver=kvm2 : (41.103593092s)
--- PASS: TestErrorSpam/setup (41.10s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.27s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 pause
--- PASS: TestErrorSpam/pause (1.27s)

                                                
                                    
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
TestErrorSpam/stop (5.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop: (2.700920447s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop: (1.476405886s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-364835 --log_dir /tmp/nospam-364835 stop: (1.355575996s)
--- PASS: TestErrorSpam/stop (5.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21830-464466/.minikube/files/etc/test/nested/copy/468355/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (62.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-498549 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m2.640445907s)
--- PASS: TestFunctional/serial/StartWithProxy (62.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (58.88s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:47:16.706391  468355 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-498549 --alsologtostderr -v=8: (58.883131447s)
functional_test.go:678: soft start took 58.88387032s for "functional-498549" cluster.
I1101 09:48:15.590244  468355 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (58.88s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-498549 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-498549 cache add registry.k8s.io/pause:latest: (1.12814868s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-498549 /tmp/TestFunctionalserialCacheCmdcacheadd_local2333431328/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache add minikube-local-cache-test:functional-498549
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-498549 cache add minikube-local-cache-test:functional-498549: (1.051319524s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache delete minikube-local-cache-test:functional-498549
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-498549
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (184.188417ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.05s)
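
Note: the cache subcommands above form a small round trip: images added with "cache add" are kept on the host and can be pushed back into the node with "cache reload" after they are removed from the container runtime. Reproduced by hand (profile name "demo" is a placeholder):

    # Cache an image on the host and load it into the node.
    minikube -p demo cache add registry.k8s.io/pause:latest

    # Remove it from the node's runtime; the follow-up inspecti now exits non-zero.
    minikube -p demo ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest

    # Re-push everything in the local cache and verify the image is back.
    minikube -p demo cache reload
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest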

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 kubectl -- --context functional-498549 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-498549 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (55.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-498549 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.618216543s)
functional_test.go:776: restart took 55.618373016s for "functional-498549" cluster.
I1101 09:49:17.407312  468355 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (55.62s)
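
Note: the restart above shows the --extra-config form component.flag=value, which is applied on the next start of an existing profile. A minimal sketch, assuming a profile named "demo" already exists:

    # Pass an admission-plugins flag to the apiserver and wait for all components.
    minikube start -p demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all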

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-498549 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 logs --file /tmp/TestFunctionalserialLogsFileCmd2803246039/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctional/serial/InvalidService (4.57s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-498549 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-498549
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-498549: exit status 115 (235.653298ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.190:31259 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-498549 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-498549 delete -f testdata/invalidsvc.yaml: (1.13528482s)
--- PASS: TestFunctional/serial/InvalidService (4.57s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 config get cpus: exit status 14 (67.170255ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 config get cpus: exit status 14 (67.924594ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
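
Note: "minikube config" behaves like a small per-profile key/value store, and as the output above shows, reading a key that is not set exits with status 14. The same sequence by hand (profile name is a placeholder):

    minikube -p demo config get cpus      # exit status 14: key not set
    minikube -p demo config set cpus 2
    minikube -p demo config get cpus      # prints 2
    minikube -p demo config unset cpus
    minikube -p demo config get cpus      # exit status 14 again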

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-498549 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (117.257001ms)

                                                
                                                
-- stdout --
	* [functional-498549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:49:36.180229  474783 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:36.180498  474783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.180510  474783 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:36.180515  474783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:36.180728  474783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:49:36.181219  474783 out.go:368] Setting JSON to false
	I1101 09:49:36.182219  474783 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5515,"bootTime":1761985061,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:36.182323  474783 start.go:143] virtualization: kvm guest
	I1101 09:49:36.183957  474783 out.go:179] * [functional-498549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:36.185301  474783 notify.go:221] Checking for updates...
	I1101 09:49:36.185364  474783 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:36.186492  474783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:36.187573  474783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:49:36.188703  474783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:49:36.189897  474783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:36.191200  474783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:36.192731  474783 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:49:36.193211  474783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:36.226561  474783 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:49:36.227663  474783 start.go:309] selected driver: kvm2
	I1101 09:49:36.227682  474783 start.go:930] validating driver "kvm2" against &{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:36.227848  474783 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:36.230076  474783 out.go:203] 
	W1101 09:49:36.231270  474783 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:49:36.232416  474783 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
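
Note: --dry-run validates flags without creating or changing anything, so an impossible request such as 250MB of memory fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), exactly as logged above. Roughly:

    # Fails validation: 250MB is below the usable minimum reported by minikube.
    minikube start -p demo --dry-run --memory 250MB --driver=kvm2

    # Passes validation against the existing profile without touching the cluster.
    minikube start -p demo --dry-run --driver=kvm2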

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-498549 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-498549 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (116.770536ms)

                                                
                                                
-- stdout --
	* [functional-498549] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:49:31.080289  474495 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:31.080526  474495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:31.080534  474495 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:31.080538  474495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:31.080844  474495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 09:49:31.081282  474495 out.go:368] Setting JSON to false
	I1101 09:49:31.082194  474495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5510,"bootTime":1761985061,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:31.082281  474495 start.go:143] virtualization: kvm guest
	I1101 09:49:31.084191  474495 out.go:179] * [functional-498549] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:31.085406  474495 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:31.085437  474495 notify.go:221] Checking for updates...
	I1101 09:49:31.087660  474495 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:31.088921  474495 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	I1101 09:49:31.090258  474495 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	I1101 09:49:31.091472  474495 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:31.092752  474495 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:31.094346  474495 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 09:49:31.094800  474495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:31.127413  474495 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1101 09:49:31.128627  474495 start.go:309] selected driver: kvm2
	I1101 09:49:31.128641  474495 start.go:930] validating driver "kvm2" against &{Name:functional-498549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-498549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:31.128734  474495 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:31.130743  474495 out.go:203] 
	W1101 09:49:31.132165  474495 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:49:31.133404  474495 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-498549 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-498549 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-m4bk7" [8c382099-fa82-42b7-94c0-e432d74a9be3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-m4bk7" [8c382099-fa82-42b7-94c0-e432d74a9be3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00410248s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.190:31330
functional_test.go:1680: http://192.168.39.190:31330: success! body:
Request served by hello-node-connect-7d85dfc575-m4bk7

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.190:31330
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.41s)
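
Note: the connectivity check above is the usual NodePort round trip: create a deployment, expose it, and let minikube resolve the reachable URL. By hand it looks roughly like this (deployment name and image taken from the run, profile name a placeholder):

    kubectl create deployment hello-node-connect --image kicbase/echo-server
    kubectl expose deployment hello-node-connect --type=NodePort --port=8080

    # Prints something like http://<node-ip>:<node-port>, which can then be curled.
    minikube -p demo service hello-node-connect --url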

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh -n functional-498549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cp functional-498549:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3581848362/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh -n functional-498549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh -n functional-498549 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.97s)
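
Note: "minikube cp" copies in both directions, as the test exercises: a host path to a path inside the node, and <profile>:<path> back out to the host. A short sketch with placeholder paths:

    # Host -> node, then read it back over ssh.
    minikube -p demo cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p demo ssh "sudo cat /home/docker/cp-test.txt"

    # Node -> host, using the <profile>:<path> form for the in-node side.
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt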

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/468355/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /etc/test/nested/copy/468355/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
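
Note: FileSync covers minikube's file-sync mechanism: files staged under the .minikube/files directory on the host (here the .minikube/files/etc/test/nested/copy/468355/hosts path from the earlier CopySyncFile step) are copied into the node at the matching absolute path. A sketch of exercising it manually, assuming the default MINIKUBE_HOME of ~/.minikube and that the sync is applied on the next start of the profile:

    # Stage a file on the host; the path under files/ becomes the path inside the node.
    mkdir -p ~/.minikube/files/etc/test
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/hosts

    # After the next "minikube start -p demo", the file is visible inside the VM.
    minikube -p demo ssh "sudo cat /etc/test/hosts"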

                                                
                                    
TestFunctional/parallel/CertSync (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/468355.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /etc/ssl/certs/468355.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/468355.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /usr/share/ca-certificates/468355.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4683552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /etc/ssl/certs/4683552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4683552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /usr/share/ca-certificates/4683552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.07s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-498549 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh "sudo systemctl is-active crio": exit status 1 (192.747377ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.19s)

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-498549 docker-env) && out/minikube-linux-amd64 status -p functional-498549"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-498549 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.75s)
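
Note: the docker-env test shows the standard way to point the local docker client at the daemon inside the minikube node: evaluate the exported variables in the current shell, then run docker as usual. Approximately (profile name is a placeholder):

    # Exports DOCKER_HOST, DOCKER_CERT_PATH and friends for this shell only.
    eval $(minikube -p demo docker-env)

    # Now lists the images inside the minikube node rather than the host daemon's.
    docker images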

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-498549 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-498549
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-498549
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-498549 image ls --format short --alsologtostderr:
I1101 09:49:43.034612  475148 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:43.034914  475148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.034925  475148 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:43.034929  475148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.035151  475148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:43.035794  475148 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.035922  475148 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.038183  475148 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:43.040530  475148 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.040953  475148 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:43.040980  475148 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.041129  475148 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:43.117551  475148 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-498549 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-498549 │ f5dc4584ce1a5 │ 30B    │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ localhost/my-image                          │ functional-498549 │ 623235c1ae63e │ 1.24MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kicbase/echo-server               │ functional-498549 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-498549 image ls --format table --alsologtostderr:
I1101 09:49:47.138615  475230 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:47.138898  475230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:47.138909  475230 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:47.138913  475230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:47.139131  475230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:47.139684  475230 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:47.139774  475230 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:47.141757  475230 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:47.143733  475230 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:47.144121  475230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:47.144143  475230 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:47.144268  475230 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:47.227555  475230 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-498549 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-498549","docker.io/kicbase/echo-server:latest"],"size":"4940000"},
{"id":"f5dc4584ce1a59cb082176665a28724eba2fcdc1276eca5ab178705e07aa1bf9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-498549"],"size":"30"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"623235c1ae63e33d4c6a903d65bc2b725962a68525c993912eec05f26fb9c0f5","repoDigests":[],"repoTags":["localhost/my-image:functional-498549"],"size":"1240000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec8
4a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-498549 image ls --format json --alsologtostderr:
I1101 09:49:46.962669  475219 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:46.962981  475219 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:46.963070  475219 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:46.963096  475219 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:46.963407  475219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:46.964148  475219 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:46.964275  475219 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:46.966414  475219 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:46.968489  475219 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:46.968901  475219 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:46.968932  475219 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:46.969092  475219 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:47.047606  475219 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-498549 image ls --format yaml --alsologtostderr:
- id: f5dc4584ce1a59cb082176665a28724eba2fcdc1276eca5ab178705e07aa1bf9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-498549
size: "30"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-498549
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-498549 image ls --format yaml --alsologtostderr:
I1101 09:49:43.207903  475159 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:43.208197  475159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.208208  475159 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:43.208212  475159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.208435  475159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:43.208983  475159 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.209078  475159 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.211150  475159 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:43.213360  475159 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.213852  475159 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:43.213883  475159 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.214091  475159 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:43.308016  475159 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh pgrep buildkitd: exit status 1 (152.888921ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr: (3.220599133s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-498549 image build -t localhost/my-image:functional-498549 testdata/build --alsologtostderr:
I1101 09:49:43.555088  475181 out.go:360] Setting OutFile to fd 1 ...
I1101 09:49:43.555355  475181 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.555366  475181 out.go:374] Setting ErrFile to fd 2...
I1101 09:49:43.555371  475181 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:49:43.555595  475181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
I1101 09:49:43.556198  475181 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.556885  475181 config.go:182] Loaded profile config "functional-498549": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1101 09:49:43.558692  475181 ssh_runner.go:195] Run: systemctl --version
I1101 09:49:43.560519  475181 main.go:143] libmachine: domain functional-498549 has defined MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.561053  475181 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:14:35:97", ip: ""} in network mk-functional-498549: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:29 +0000 UTC Type:0 Mac:52:54:00:14:35:97 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-498549 Clientid:01:52:54:00:14:35:97}
I1101 09:49:43.561083  475181 main.go:143] libmachine: domain functional-498549 has defined IP address 192.168.39.190 and MAC address 52:54:00:14:35:97 in network mk-functional-498549
I1101 09:49:43.561269  475181 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/functional-498549/id_rsa Username:docker}
I1101 09:49:43.638590  475181 build_images.go:162] Building image from path: /tmp/build.923169381.tar
I1101 09:49:43.638680  475181 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:49:43.655495  475181 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.923169381.tar
I1101 09:49:43.660785  475181 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.923169381.tar: stat -c "%s %y" /var/lib/minikube/build/build.923169381.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.923169381.tar': No such file or directory
I1101 09:49:43.660843  475181 ssh_runner.go:362] scp /tmp/build.923169381.tar --> /var/lib/minikube/build/build.923169381.tar (3072 bytes)
I1101 09:49:43.691606  475181 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.923169381
I1101 09:49:43.703878  475181 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.923169381 -xf /var/lib/minikube/build/build.923169381.tar
I1101 09:49:43.715600  475181 docker.go:361] Building image: /var/lib/minikube/build/build.923169381
I1101 09:49:43.715701  475181 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-498549 /var/lib/minikube/build/build.923169381
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:623235c1ae63e33d4c6a903d65bc2b725962a68525c993912eec05f26fb9c0f5
#8 writing image sha256:623235c1ae63e33d4c6a903d65bc2b725962a68525c993912eec05f26fb9c0f5 done
#8 naming to localhost/my-image:functional-498549 done
#8 DONE 0.1s
I1101 09:49:46.678248  475181 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-498549 /var/lib/minikube/build/build.923169381: (2.962512153s)
I1101 09:49:46.678347  475181 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.923169381
I1101 09:49:46.695465  475181 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.923169381.tar
I1101 09:49:46.711508  475181 build_images.go:218] Built localhost/my-image:functional-498549 from /tmp/build.923169381.tar
I1101 09:49:46.711559  475181 build_images.go:134] succeeded building to: functional-498549
I1101 09:49:46.711565  475181 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
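Note: the BuildKit stages above (#1 through #8) correspond to a three-step build. A minimal Dockerfile sketch, reconstructed from the logged build steps rather than taken from testdata/build itself (the actual file may differ in formatting or comments):

    # Reconstructed from the build log above; the real testdata/build Dockerfile may differ.
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The resulting image is tagged localhost/my-image:functional-498549 and is what appears in the image ls listings earlier in this report.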

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.735978104s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-498549
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "270.906998ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.548675ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "256.439091ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.187927ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-498549 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-498549 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7x2w6" [47bd1fab-3a8e-4add-9e60-8a605abe8ab7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-7x2w6" [47bd1fab-3a8e-4add-9e60-8a605abe8ab7] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004139263s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image load --daemon kicbase/echo-server:functional-498549 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image load --daemon kicbase/echo-server:functional-498549 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-498549
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image load --daemon kicbase/echo-server:functional-498549 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image save kicbase/echo-server:functional-498549 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image rm kicbase/echo-server:functional-498549 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-498549
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 image save --daemon kicbase/echo-server:functional-498549 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-498549
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdany-port365256132/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761990571140698466" to /tmp/TestFunctionalparallelMountCmdany-port365256132/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761990571140698466" to /tmp/TestFunctionalparallelMountCmdany-port365256132/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761990571140698466" to /tmp/TestFunctionalparallelMountCmdany-port365256132/001/test-1761990571140698466
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.096702ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:31.295232  468355 retry.go:31] will retry after 355.99496ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:49 test-1761990571140698466
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh cat /mount-9p/test-1761990571140698466
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-498549 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ed019c4f-c8aa-4b49-83fc-7f5a2f55dcf1] Pending
helpers_test.go:352: "busybox-mount" [ed019c4f-c8aa-4b49-83fc-7f5a2f55dcf1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ed019c4f-c8aa-4b49-83fc-7f5a2f55dcf1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ed019c4f-c8aa-4b49-83fc-7f5a2f55dcf1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008106688s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-498549 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdany-port365256132/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service list -o json
functional_test.go:1504: Took "323.38474ms" to run "out/minikube-linux-amd64 -p functional-498549 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.190:32164
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 update-context --alsologtostderr -v=2
E1101 09:49:56.336005  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.342484  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.353985  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.375481  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.417065  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.498682  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.660454  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:56.982203  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:57.624526  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:58.906607  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:50:01.468839  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:50:06.591056  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:50:16.833274  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:50:37.315060  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:51:18.277482  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:40.199859  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.190:32164
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdspecific-port2763790135/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.703718ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:39.140163  468355 retry.go:31] will retry after 566.596609ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdspecific-port2763790135/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh "sudo umount -f /mount-9p": exit status 1 (149.027291ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-498549 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdspecific-port2763790135/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T" /mount1: exit status 1 (232.312169ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:49:40.629868  468355 retry.go:31] will retry after 377.029239ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-498549 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-498549 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-498549 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2324950463/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-498549
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-498549
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-498549
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (135.55s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-392081 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-392081 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (53.909187058s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-392081 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-392081 cache add gcr.io/k8s-minikube/gvisor-addon:2: (4.775187029s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-392081 addons enable gvisor
E1101 10:38:57.476982  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:58.119094  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:59.401022  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-392081 addons enable gvisor: (4.638821089s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [3d07b9d3-dd05-4133-96d4-141848f61f7c] Running
E1101 10:39:01.963283  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:39:07.085637  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005004497s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-392081 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [10c35843-34a9-4b54-aa3c-5aafcbf80002] Pending
helpers_test.go:352: "nginx-gvisor" [10c35843-34a9-4b54-aa3c-5aafcbf80002] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-gvisor" [10c35843-34a9-4b54-aa3c-5aafcbf80002] Running
E1101 10:39:17.327949  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.007021392s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-392081
E1101 10:39:24.637088  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-392081: (7.185129182s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-392081 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-392081 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (32.917151797s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [3d07b9d3-dd05-4133-96d4-141848f61f7c] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:352: "gvisor" [3d07b9d3-dd05-4133-96d4-141848f61f7c] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00458271s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [10c35843-34a9-4b54-aa3c-5aafcbf80002] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.006090379s
helpers_test.go:175: Cleaning up "gvisor-392081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-392081
--- PASS: TestGvisorAddon (135.55s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (241.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E1101 09:59:56.336442  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (4m1.003156974s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (241.60s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 kubectl -- rollout status deployment/busybox: (3.85314017s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-54klc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-dhzvh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-gb8gj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-54klc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-dhzvh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-gb8gj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-54klc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-dhzvh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-gb8gj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.42s)
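
Note: the DeployApp step amounts to deploying the busybox DNS test manifest and checking in-cluster name resolution. A rough manual equivalent (it assumes the busybox pods are the only pods in the default namespace, as in this run):

  minikube -p ha-107817 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  minikube -p ha-107817 kubectl -- rollout status deployment/busybox
  # pick one busybox pod and resolve an external and an in-cluster name from it
  POD=$(minikube -p ha-107817 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
  minikube -p ha-107817 kubectl -- exec "$POD" -- nslookup kubernetes.io
  minikube -p ha-107817 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local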

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-54klc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-54klc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-dhzvh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-dhzvh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-gb8gj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 kubectl -- exec busybox-7b57f96db7-gb8gj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
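
Note: the host-reachability check boils down to resolving host.minikube.internal inside a pod and pinging the resulting address (192.168.39.1, the KVM network gateway, in this run). A sketch reusing the same awk/cut extraction the test uses:

  POD=$(minikube -p ha-107817 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
  HOST_IP=$(minikube -p ha-107817 kubectl -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  minikube -p ha-107817 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"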

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (70.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node add --alsologtostderr -v 5
E1101 10:04:24.637686  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.644157  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.655659  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.677208  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.718735  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.800318  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:24.961943  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:25.284213  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:25.926329  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:27.208523  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:29.771594  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:34.893605  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:45.135912  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:56.335579  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 node add --alsologtostderr -v 5: (1m10.151929931s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (70.84s)
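
Note: the repeated cert_rotation errors above point at client certs of profiles torn down earlier in this run (addons-171954, functional-498549) and appear to be harmless kubeconfig-cache noise; they did not affect the result. Adding the worker by hand is a single command:

  minikube -p ha-107817 node add --alsologtostderr -v 5
  # the new node shows up as ha-107817-m04 with type Worker
  minikube -p ha-107817 status --alsologtostderr -v 5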

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-107817 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp testdata/cp-test.txt ha-107817:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2541717499/001/cp-test_ha-107817.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817:/home/docker/cp-test.txt ha-107817-m02:/home/docker/cp-test_ha-107817_ha-107817-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test_ha-107817_ha-107817-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817:/home/docker/cp-test.txt ha-107817-m03:/home/docker/cp-test_ha-107817_ha-107817-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test_ha-107817_ha-107817-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817:/home/docker/cp-test.txt ha-107817-m04:/home/docker/cp-test_ha-107817_ha-107817-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test_ha-107817_ha-107817-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp testdata/cp-test.txt ha-107817-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2541717499/001/cp-test_ha-107817-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m02:/home/docker/cp-test.txt ha-107817:/home/docker/cp-test_ha-107817-m02_ha-107817.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test_ha-107817-m02_ha-107817.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m02:/home/docker/cp-test.txt ha-107817-m03:/home/docker/cp-test_ha-107817-m02_ha-107817-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test.txt"
E1101 10:05:05.617570  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test_ha-107817-m02_ha-107817-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m02:/home/docker/cp-test.txt ha-107817-m04:/home/docker/cp-test_ha-107817-m02_ha-107817-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test_ha-107817-m02_ha-107817-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp testdata/cp-test.txt ha-107817-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2541717499/001/cp-test_ha-107817-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m03:/home/docker/cp-test.txt ha-107817:/home/docker/cp-test_ha-107817-m03_ha-107817.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test_ha-107817-m03_ha-107817.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m03:/home/docker/cp-test.txt ha-107817-m02:/home/docker/cp-test_ha-107817-m03_ha-107817-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test_ha-107817-m03_ha-107817-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m03:/home/docker/cp-test.txt ha-107817-m04:/home/docker/cp-test_ha-107817-m03_ha-107817-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test_ha-107817-m03_ha-107817-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp testdata/cp-test.txt ha-107817-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2541717499/001/cp-test_ha-107817-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m04:/home/docker/cp-test.txt ha-107817:/home/docker/cp-test_ha-107817-m04_ha-107817.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817 "sudo cat /home/docker/cp-test_ha-107817-m04_ha-107817.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m04:/home/docker/cp-test.txt ha-107817-m02:/home/docker/cp-test_ha-107817-m04_ha-107817-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test_ha-107817-m04_ha-107817-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 cp ha-107817-m04:/home/docker/cp-test.txt ha-107817-m03:/home/docker/cp-test_ha-107817-m04_ha-107817-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 ssh -n ha-107817-m03 "sudo cat /home/docker/cp-test_ha-107817-m04_ha-107817-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.13s)
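
Note: the CopyFile matrix exercises minikube cp in three directions (host to node, node to host, node to node), each verified with minikube ssh. The basic shapes, using this run's node names:

  # host -> node
  minikube -p ha-107817 cp testdata/cp-test.txt ha-107817:/home/docker/cp-test.txt
  # node -> host
  minikube -p ha-107817 cp ha-107817:/home/docker/cp-test.txt /tmp/cp-test_ha-107817.txt
  # node -> node
  minikube -p ha-107817 cp ha-107817:/home/docker/cp-test.txt ha-107817-m02:/home/docker/cp-test_ha-107817_ha-107817-m02.txt
  # verify on the target node
  minikube -p ha-107817 ssh -n ha-107817-m02 "sudo cat /home/docker/cp-test_ha-107817_ha-107817-m02.txt"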

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 node stop m02 --alsologtostderr -v 5: (13.990534146s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5: exit status 7 (506.114554ms)

                                                
                                                
-- stdout --
	ha-107817
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107817-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107817-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107817-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:05:25.798360  480417 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:05:25.798720  480417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:05:25.798730  480417 out.go:374] Setting ErrFile to fd 2...
	I1101 10:05:25.798734  480417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:05:25.798999  480417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 10:05:25.799184  480417 out.go:368] Setting JSON to false
	I1101 10:05:25.799211  480417 mustload.go:66] Loading cluster: ha-107817
	I1101 10:05:25.799414  480417 notify.go:221] Checking for updates...
	I1101 10:05:25.799624  480417 config.go:182] Loaded profile config "ha-107817": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 10:05:25.799642  480417 status.go:174] checking status of ha-107817 ...
	I1101 10:05:25.801854  480417 status.go:371] ha-107817 host status = "Running" (err=<nil>)
	I1101 10:05:25.801884  480417 host.go:66] Checking if "ha-107817" exists ...
	I1101 10:05:25.805139  480417 main.go:143] libmachine: domain ha-107817 has defined MAC address 52:54:00:9d:af:86 in network mk-ha-107817
	I1101 10:05:25.805691  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9d:af:86", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 10:59:54 +0000 UTC Type:0 Mac:52:54:00:9d:af:86 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-107817 Clientid:01:52:54:00:9d:af:86}
	I1101 10:05:25.805729  480417 main.go:143] libmachine: domain ha-107817 has defined IP address 192.168.39.172 and MAC address 52:54:00:9d:af:86 in network mk-ha-107817
	I1101 10:05:25.805958  480417 host.go:66] Checking if "ha-107817" exists ...
	I1101 10:05:25.806192  480417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:05:25.808338  480417 main.go:143] libmachine: domain ha-107817 has defined MAC address 52:54:00:9d:af:86 in network mk-ha-107817
	I1101 10:05:25.808732  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9d:af:86", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 10:59:54 +0000 UTC Type:0 Mac:52:54:00:9d:af:86 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-107817 Clientid:01:52:54:00:9d:af:86}
	I1101 10:05:25.808755  480417 main.go:143] libmachine: domain ha-107817 has defined IP address 192.168.39.172 and MAC address 52:54:00:9d:af:86 in network mk-ha-107817
	I1101 10:05:25.808918  480417 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/ha-107817/id_rsa Username:docker}
	I1101 10:05:25.896028  480417 ssh_runner.go:195] Run: systemctl --version
	I1101 10:05:25.902335  480417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:05:25.921304  480417 kubeconfig.go:125] found "ha-107817" server: "https://192.168.39.254:8443"
	I1101 10:05:25.921352  480417 api_server.go:166] Checking apiserver status ...
	I1101 10:05:25.921389  480417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:05:25.945228  480417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2374/cgroup
	W1101 10:05:25.957299  480417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:05:25.957353  480417 ssh_runner.go:195] Run: ls
	I1101 10:05:25.962840  480417 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 10:05:25.968269  480417 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 10:05:25.968296  480417 status.go:463] ha-107817 apiserver status = Running (err=<nil>)
	I1101 10:05:25.968306  480417 status.go:176] ha-107817 status: &{Name:ha-107817 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:05:25.968344  480417 status.go:174] checking status of ha-107817-m02 ...
	I1101 10:05:25.970264  480417 status.go:371] ha-107817-m02 host status = "Stopped" (err=<nil>)
	I1101 10:05:25.970291  480417 status.go:384] host is not running, skipping remaining checks
	I1101 10:05:25.970299  480417 status.go:176] ha-107817-m02 status: &{Name:ha-107817-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:05:25.970328  480417 status.go:174] checking status of ha-107817-m03 ...
	I1101 10:05:25.971785  480417 status.go:371] ha-107817-m03 host status = "Running" (err=<nil>)
	I1101 10:05:25.971821  480417 host.go:66] Checking if "ha-107817-m03" exists ...
	I1101 10:05:25.974342  480417 main.go:143] libmachine: domain ha-107817-m03 has defined MAC address 52:54:00:71:30:43 in network mk-ha-107817
	I1101 10:05:25.974739  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:30:43", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 11:02:11 +0000 UTC Type:0 Mac:52:54:00:71:30:43 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-107817-m03 Clientid:01:52:54:00:71:30:43}
	I1101 10:05:25.974781  480417 main.go:143] libmachine: domain ha-107817-m03 has defined IP address 192.168.39.152 and MAC address 52:54:00:71:30:43 in network mk-ha-107817
	I1101 10:05:25.974967  480417 host.go:66] Checking if "ha-107817-m03" exists ...
	I1101 10:05:25.975178  480417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:05:25.977654  480417 main.go:143] libmachine: domain ha-107817-m03 has defined MAC address 52:54:00:71:30:43 in network mk-ha-107817
	I1101 10:05:25.978170  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:30:43", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 11:02:11 +0000 UTC Type:0 Mac:52:54:00:71:30:43 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-107817-m03 Clientid:01:52:54:00:71:30:43}
	I1101 10:05:25.978215  480417 main.go:143] libmachine: domain ha-107817-m03 has defined IP address 192.168.39.152 and MAC address 52:54:00:71:30:43 in network mk-ha-107817
	I1101 10:05:25.978402  480417 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/ha-107817-m03/id_rsa Username:docker}
	I1101 10:05:26.061221  480417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:05:26.081721  480417 kubeconfig.go:125] found "ha-107817" server: "https://192.168.39.254:8443"
	I1101 10:05:26.081763  480417 api_server.go:166] Checking apiserver status ...
	I1101 10:05:26.081826  480417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:05:26.105269  480417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2355/cgroup
	W1101 10:05:26.117841  480417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2355/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:05:26.117973  480417 ssh_runner.go:195] Run: ls
	I1101 10:05:26.123238  480417 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 10:05:26.128604  480417 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 10:05:26.128638  480417 status.go:463] ha-107817-m03 apiserver status = Running (err=<nil>)
	I1101 10:05:26.128651  480417 status.go:176] ha-107817-m03 status: &{Name:ha-107817-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:05:26.128669  480417 status.go:174] checking status of ha-107817-m04 ...
	I1101 10:05:26.130671  480417 status.go:371] ha-107817-m04 host status = "Running" (err=<nil>)
	I1101 10:05:26.130702  480417 host.go:66] Checking if "ha-107817-m04" exists ...
	I1101 10:05:26.133405  480417 main.go:143] libmachine: domain ha-107817-m04 has defined MAC address 52:54:00:99:46:1c in network mk-ha-107817
	I1101 10:05:26.133875  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:46:1c", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 11:04:05 +0000 UTC Type:0 Mac:52:54:00:99:46:1c Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-107817-m04 Clientid:01:52:54:00:99:46:1c}
	I1101 10:05:26.133909  480417 main.go:143] libmachine: domain ha-107817-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:99:46:1c in network mk-ha-107817
	I1101 10:05:26.134075  480417 host.go:66] Checking if "ha-107817-m04" exists ...
	I1101 10:05:26.134311  480417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:05:26.136542  480417 main.go:143] libmachine: domain ha-107817-m04 has defined MAC address 52:54:00:99:46:1c in network mk-ha-107817
	I1101 10:05:26.136952  480417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:99:46:1c", ip: ""} in network mk-ha-107817: {Iface:virbr1 ExpiryTime:2025-11-01 11:04:05 +0000 UTC Type:0 Mac:52:54:00:99:46:1c Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-107817-m04 Clientid:01:52:54:00:99:46:1c}
	I1101 10:05:26.136978  480417 main.go:143] libmachine: domain ha-107817-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:99:46:1c in network mk-ha-107817
	I1101 10:05:26.137151  480417 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/ha-107817-m04/id_rsa Username:docker}
	I1101 10:05:26.219461  480417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:05:26.239657  480417 status.go:176] ha-107817-m04 status: &{Name:ha-107817-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.50s)
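
Note: once any node is stopped, minikube status deliberately exits non-zero (exit status 7 in this run), so scripts that poll status while a node is down need to tolerate that. Sketch:

  minikube -p ha-107817 node stop m02 --alsologtostderr -v 5
  # expected to fail with a non-zero exit code while m02 is Stopped
  minikube -p ha-107817 status --alsologtostderr -v 5 || echo "status exited with $? as expected"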

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node start m02 --alsologtostderr -v 5
E1101 10:05:46.580101  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 node start m02 --alsologtostderr -v 5: (24.688051226s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.57s)
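
Note: restarting the stopped control-plane node and re-checking cluster health, along the lines the test follows:

  minikube -p ha-107817 node start m02 --alsologtostderr -v 5
  minikube -p ha-107817 status --alsologtostderr -v 5
  kubectl get nodes   # all nodes should be Ready again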

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 stop --alsologtostderr -v 5
E1101 10:06:19.404299  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 stop --alsologtostderr -v 5: (40.863249951s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 start --wait true --alsologtostderr -v 5
E1101 10:07:08.502013  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 start --wait true --alsologtostderr -v 5: (2m12.572436452s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.59s)
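
Note: the point of this test is that a full stop/start cycle preserves the node list. The manual equivalent records the list before stopping and compares it after the restart:

  minikube -p ha-107817 node list --alsologtostderr -v 5    # record the node list
  minikube -p ha-107817 stop --alsologtostderr -v 5
  minikube -p ha-107817 start --wait true --alsologtostderr -v 5
  minikube -p ha-107817 node list --alsologtostderr -v 5    # should match the pre-stop list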

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 node delete m03 --alsologtostderr -v 5: (6.590212699s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.30s)
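
Note: removing a secondary control-plane node by hand mirrors the steps above:

  minikube -p ha-107817 node delete m03 --alsologtostderr -v 5
  minikube -p ha-107817 status --alsologtostderr -v 5
  kubectl get nodes   # m03 is gone; the remaining nodes stay Ready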

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (39.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 stop --alsologtostderr -v 5
E1101 10:09:24.637799  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 stop --alsologtostderr -v 5: (39.643722998s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5: exit status 7 (69.683208ms)

                                                
                                                
-- stdout --
	ha-107817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107817-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107817-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:09:34.347095  481985 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:09:34.347378  481985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:09:34.347389  481985 out.go:374] Setting ErrFile to fd 2...
	I1101 10:09:34.347396  481985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:09:34.347597  481985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 10:09:34.347813  481985 out.go:368] Setting JSON to false
	I1101 10:09:34.347848  481985 mustload.go:66] Loading cluster: ha-107817
	I1101 10:09:34.347961  481985 notify.go:221] Checking for updates...
	I1101 10:09:34.348351  481985 config.go:182] Loaded profile config "ha-107817": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 10:09:34.348370  481985 status.go:174] checking status of ha-107817 ...
	I1101 10:09:34.350285  481985 status.go:371] ha-107817 host status = "Stopped" (err=<nil>)
	I1101 10:09:34.350302  481985 status.go:384] host is not running, skipping remaining checks
	I1101 10:09:34.350309  481985 status.go:176] ha-107817 status: &{Name:ha-107817 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:09:34.350334  481985 status.go:174] checking status of ha-107817-m02 ...
	I1101 10:09:34.351689  481985 status.go:371] ha-107817-m02 host status = "Stopped" (err=<nil>)
	I1101 10:09:34.351707  481985 status.go:384] host is not running, skipping remaining checks
	I1101 10:09:34.351714  481985 status.go:176] ha-107817-m02 status: &{Name:ha-107817-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:09:34.351733  481985 status.go:174] checking status of ha-107817-m04 ...
	I1101 10:09:34.353006  481985 status.go:371] ha-107817-m04 host status = "Stopped" (err=<nil>)
	I1101 10:09:34.353023  481985 status.go:384] host is not running, skipping remaining checks
	I1101 10:09:34.353029  481985 status.go:176] ha-107817-m04 status: &{Name:ha-107817-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (109.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E1101 10:09:52.344305  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:09:56.336517  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (1m49.136433603s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (109.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (86.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-107817 node add --control-plane --alsologtostderr -v 5: (1m25.543455803s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-107817 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.26s)
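
Note: re-adding a control-plane node (rather than a worker) uses node add with the --control-plane flag; in this run it took roughly a minute and a half:

  minikube -p ha-107817 node add --control-plane --alsologtostderr -v 5
  minikube -p ha-107817 status --alsologtostderr -v 5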

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.70s)

                                                
                                    
TestImageBuild/serial/Setup (42.65s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-451982 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-451982 --driver=kvm2 : (42.648073519s)
--- PASS: TestImageBuild/serial/Setup (42.65s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-451982
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-451982: (1.767434739s)
--- PASS: TestImageBuild/serial/NormalBuild (1.77s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-451982
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-451982
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-451982
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.00s)
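
Note: the four ImageBuild cases map directly onto these minikube image build invocations (the contexts are test-data directories from the minikube repo, and aaa:latest is just the test's placeholder tag):

  # plain build
  minikube image build -t aaa:latest ./testdata/image-build/test-normal -p image-451982
  # build with a build-arg, bypassing the cache
  minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-451982
  # build a context containing a .dockerignore
  minikube image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-451982
  # build with an explicit Dockerfile path inside the context
  minikube image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-451982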

                                                
                                    
TestJSONOutput/start/Command (60.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-851713 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E1101 10:14:24.644450  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-851713 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m0.887704414s)
--- PASS: TestJSONOutput/start/Command (60.89s)
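
Note: the JSONOutput group drives ordinary minikube commands with --output=json, which makes them emit one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout further down shows the event shape). A sketch that extracts just the human-readable messages; the jq filter is an illustration, not part of the test:

  minikube start -p json-output-851713 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 \
    | jq -r '.data.message // empty'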

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-851713 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-851713 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-851713 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-851713 --output=json --user=testUser: (6.696748247s)
--- PASS: TestJSONOutput/stop/Command (6.70s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-900067 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-900067 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (85.117969ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8fc4c633-c787-4871-adfc-a47407309584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-900067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b49202d-424a-4c11-a59d-6443344b78b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"2b4aeb72-86dd-46d8-aa35-c77a360d4028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc11ccd5-4109-4c29-8851-67f3ce3c26ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig"}}
	{"specversion":"1.0","id":"8431d7be-6004-47df-ae86-dd54471474b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube"}}
	{"specversion":"1.0","id":"e58167b3-5a91-4306-9696-14dbb8e9bcbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1154cd14-4bb7-4d1c-a859-f359cff75c3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2ab4743-1f4e-4cd8-a123-a6b270112597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-900067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-900067
--- PASS: TestErrorJSONOutput (0.25s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (89.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-088997 --driver=kvm2 
E1101 10:14:56.335657  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-088997 --driver=kvm2 : (44.003571585s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-091308 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-091308 --driver=kvm2 : (42.432296226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-088997
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-091308
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-091308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-091308
helpers_test.go:175: Cleaning up "first-088997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-088997
--- PASS: TestMinikubeProfile (89.09s)
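
Note: the profile test creates two single-node clusters and flips the active profile between them; roughly:

  minikube start -p first-088997 --driver=kvm2
  minikube start -p second-091308 --driver=kvm2
  minikube profile first-088997     # make first-088997 the active profile
  minikube profile list -ojson      # inspect both profiles
  minikube profile second-091308
  minikube delete -p second-091308
  minikube delete -p first-088997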

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-790116 --memory=3072 --mount-string /tmp/TestMountStartserial3696859021/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-790116 --memory=3072 --mount-string /tmp/TestMountStartserial3696859021/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (24.778880054s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.78s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-790116 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-790116 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (25.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-820071 --memory=3072 --mount-string /tmp/TestMountStartserial3696859021/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-820071 --memory=3072 --mount-string /tmp/TestMountStartserial3696859021/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (24.793953008s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.79s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-790116 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-820071
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-820071: (1.28424898s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (20.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-820071
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-820071: (19.217722934s)
--- PASS: TestMountStart/serial/RestartStopped (20.22s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820071 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (119.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-692980 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E1101 10:19:24.638427  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-692980 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (1m58.850418815s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.19s)

TestMultiNode/serial/DeployApp2Nodes (5.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-692980 -- rollout status deployment/busybox: (3.439116361s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-7dfx7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-stkst -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-7dfx7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-stkst -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-7dfx7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-stkst -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.10s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-7dfx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-7dfx7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-stkst -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-692980 -- exec busybox-7b57f96db7-stkst -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (49.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-692980 -v=5 --alsologtostderr
E1101 10:19:56.336016  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-692980 -v=5 --alsologtostderr: (49.208609012s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.65s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-692980 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (6.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp testdata/cp-test.txt multinode-692980:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3018934060/001/cp-test_multinode-692980.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980:/home/docker/cp-test.txt multinode-692980-m02:/home/docker/cp-test_multinode-692980_multinode-692980-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test_multinode-692980_multinode-692980-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980:/home/docker/cp-test.txt multinode-692980-m03:/home/docker/cp-test_multinode-692980_multinode-692980-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test_multinode-692980_multinode-692980-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp testdata/cp-test.txt multinode-692980-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3018934060/001/cp-test_multinode-692980-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m02:/home/docker/cp-test.txt multinode-692980:/home/docker/cp-test_multinode-692980-m02_multinode-692980.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test_multinode-692980-m02_multinode-692980.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m02:/home/docker/cp-test.txt multinode-692980-m03:/home/docker/cp-test_multinode-692980-m02_multinode-692980-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test_multinode-692980-m02_multinode-692980-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp testdata/cp-test.txt multinode-692980-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3018934060/001/cp-test_multinode-692980-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m03:/home/docker/cp-test.txt multinode-692980:/home/docker/cp-test_multinode-692980-m03_multinode-692980.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980 "sudo cat /home/docker/cp-test_multinode-692980-m03_multinode-692980.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 cp multinode-692980-m03:/home/docker/cp-test.txt multinode-692980-m02:/home/docker/cp-test_multinode-692980-m03_multinode-692980-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 ssh -n multinode-692980-m02 "sudo cat /home/docker/cp-test_multinode-692980-m03_multinode-692980-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.09s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-692980 node stop m03: (1.762007123s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-692980 status: exit status 7 (331.314771ms)

                                                
                                                
-- stdout --
	multinode-692980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-692980-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-692980-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr: exit status 7 (324.423703ms)

                                                
                                                
-- stdout --
	multinode-692980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-692980-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-692980-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:20:40.941103  488484 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:40.941208  488484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:40.941217  488484 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:40.941221  488484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:40.941395  488484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 10:20:40.941574  488484 out.go:368] Setting JSON to false
	I1101 10:20:40.941601  488484 mustload.go:66] Loading cluster: multinode-692980
	I1101 10:20:40.941648  488484 notify.go:221] Checking for updates...
	I1101 10:20:40.941983  488484 config.go:182] Loaded profile config "multinode-692980": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 10:20:40.941999  488484 status.go:174] checking status of multinode-692980 ...
	I1101 10:20:40.944420  488484 status.go:371] multinode-692980 host status = "Running" (err=<nil>)
	I1101 10:20:40.944445  488484 host.go:66] Checking if "multinode-692980" exists ...
	I1101 10:20:40.947635  488484 main.go:143] libmachine: domain multinode-692980 has defined MAC address 52:54:00:4a:7b:65 in network mk-multinode-692980
	I1101 10:20:40.948066  488484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:7b:65", ip: ""} in network mk-multinode-692980: {Iface:virbr1 ExpiryTime:2025-11-01 11:17:53 +0000 UTC Type:0 Mac:52:54:00:4a:7b:65 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:multinode-692980 Clientid:01:52:54:00:4a:7b:65}
	I1101 10:20:40.948094  488484 main.go:143] libmachine: domain multinode-692980 has defined IP address 192.168.39.252 and MAC address 52:54:00:4a:7b:65 in network mk-multinode-692980
	I1101 10:20:40.948301  488484 host.go:66] Checking if "multinode-692980" exists ...
	I1101 10:20:40.948559  488484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:20:40.951450  488484 main.go:143] libmachine: domain multinode-692980 has defined MAC address 52:54:00:4a:7b:65 in network mk-multinode-692980
	I1101 10:20:40.952070  488484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:7b:65", ip: ""} in network mk-multinode-692980: {Iface:virbr1 ExpiryTime:2025-11-01 11:17:53 +0000 UTC Type:0 Mac:52:54:00:4a:7b:65 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:multinode-692980 Clientid:01:52:54:00:4a:7b:65}
	I1101 10:20:40.952118  488484 main.go:143] libmachine: domain multinode-692980 has defined IP address 192.168.39.252 and MAC address 52:54:00:4a:7b:65 in network mk-multinode-692980
	I1101 10:20:40.952384  488484 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/multinode-692980/id_rsa Username:docker}
	I1101 10:20:41.031175  488484 ssh_runner.go:195] Run: systemctl --version
	I1101 10:20:41.037453  488484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:41.054861  488484 kubeconfig.go:125] found "multinode-692980" server: "https://192.168.39.252:8443"
	I1101 10:20:41.054906  488484 api_server.go:166] Checking apiserver status ...
	I1101 10:20:41.054970  488484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:20:41.075047  488484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2494/cgroup
	W1101 10:20:41.087843  488484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2494/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:20:41.087945  488484 ssh_runner.go:195] Run: ls
	I1101 10:20:41.093003  488484 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I1101 10:20:41.098630  488484 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I1101 10:20:41.098679  488484 status.go:463] multinode-692980 apiserver status = Running (err=<nil>)
	I1101 10:20:41.098694  488484 status.go:176] multinode-692980 status: &{Name:multinode-692980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:20:41.098719  488484 status.go:174] checking status of multinode-692980-m02 ...
	I1101 10:20:41.100419  488484 status.go:371] multinode-692980-m02 host status = "Running" (err=<nil>)
	I1101 10:20:41.100443  488484 host.go:66] Checking if "multinode-692980-m02" exists ...
	I1101 10:20:41.102883  488484 main.go:143] libmachine: domain multinode-692980-m02 has defined MAC address 52:54:00:fa:64:97 in network mk-multinode-692980
	I1101 10:20:41.103296  488484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:64:97", ip: ""} in network mk-multinode-692980: {Iface:virbr1 ExpiryTime:2025-11-01 11:19:00 +0000 UTC Type:0 Mac:52:54:00:fa:64:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-692980-m02 Clientid:01:52:54:00:fa:64:97}
	I1101 10:20:41.103325  488484 main.go:143] libmachine: domain multinode-692980-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:fa:64:97 in network mk-multinode-692980
	I1101 10:20:41.103466  488484 host.go:66] Checking if "multinode-692980-m02" exists ...
	I1101 10:20:41.103699  488484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:20:41.106101  488484 main.go:143] libmachine: domain multinode-692980-m02 has defined MAC address 52:54:00:fa:64:97 in network mk-multinode-692980
	I1101 10:20:41.106469  488484 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fa:64:97", ip: ""} in network mk-multinode-692980: {Iface:virbr1 ExpiryTime:2025-11-01 11:19:00 +0000 UTC Type:0 Mac:52:54:00:fa:64:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-692980-m02 Clientid:01:52:54:00:fa:64:97}
	I1101 10:20:41.106496  488484 main.go:143] libmachine: domain multinode-692980-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:fa:64:97 in network mk-multinode-692980
	I1101 10:20:41.106641  488484 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-464466/.minikube/machines/multinode-692980-m02/id_rsa Username:docker}
	I1101 10:20:41.183972  488484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:41.200725  488484 status.go:176] multinode-692980-m02 status: &{Name:multinode-692980-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:20:41.200773  488484 status.go:174] checking status of multinode-692980-m03 ...
	I1101 10:20:41.202648  488484 status.go:371] multinode-692980-m03 host status = "Stopped" (err=<nil>)
	I1101 10:20:41.202679  488484 status.go:384] host is not running, skipping remaining checks
	I1101 10:20:41.202688  488484 status.go:176] multinode-692980-m03 status: &{Name:multinode-692980-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

TestMultiNode/serial/StartAfterStop (41.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 node start m03 -v=5 --alsologtostderr
E1101 10:20:47.705696  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-692980 node start m03 -v=5 --alsologtostderr: (40.78578333s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.30s)

TestMultiNode/serial/RestartKeepsNodes (154.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-692980
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-692980
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-692980: (27.005859263s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-692980 --wait=true -v=5 --alsologtostderr
E1101 10:22:59.408219  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-692980 --wait=true -v=5 --alsologtostderr: (2m7.597276026s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-692980
--- PASS: TestMultiNode/serial/RestartKeepsNodes (154.74s)

TestMultiNode/serial/DeleteNode (2.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-692980 node delete m03: (1.681333959s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

TestMultiNode/serial/StopMultiNode (26.39s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 stop
E1101 10:24:24.644940  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-692980 stop: (26.261114441s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-692980 status: exit status 7 (66.104492ms)

                                                
                                                
-- stdout --
	multinode-692980
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-692980-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr: exit status 7 (67.042239ms)

                                                
                                                
-- stdout --
	multinode-692980
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-692980-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:24:25.782642  489811 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:24:25.782924  489811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:24:25.782933  489811 out.go:374] Setting ErrFile to fd 2...
	I1101 10:24:25.782937  489811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:24:25.783158  489811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-464466/.minikube/bin
	I1101 10:24:25.783320  489811 out.go:368] Setting JSON to false
	I1101 10:24:25.783346  489811 mustload.go:66] Loading cluster: multinode-692980
	I1101 10:24:25.783580  489811 notify.go:221] Checking for updates...
	I1101 10:24:25.784649  489811 config.go:182] Loaded profile config "multinode-692980": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1101 10:24:25.784687  489811 status.go:174] checking status of multinode-692980 ...
	I1101 10:24:25.787661  489811 status.go:371] multinode-692980 host status = "Stopped" (err=<nil>)
	I1101 10:24:25.787681  489811 status.go:384] host is not running, skipping remaining checks
	I1101 10:24:25.787687  489811 status.go:176] multinode-692980 status: &{Name:multinode-692980 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:24:25.787708  489811 status.go:174] checking status of multinode-692980-m02 ...
	I1101 10:24:25.789198  489811 status.go:371] multinode-692980-m02 host status = "Stopped" (err=<nil>)
	I1101 10:24:25.789215  489811 status.go:384] host is not running, skipping remaining checks
	I1101 10:24:25.789220  489811 status.go:176] multinode-692980-m02 status: &{Name:multinode-692980-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.39s)

TestMultiNode/serial/RestartMultiNode (90.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-692980 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E1101 10:24:56.336606  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-692980 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (1m30.185036179s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-692980 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.67s)

TestMultiNode/serial/ValidateNameConflict (45.5s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-692980
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-692980-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-692980-m02 --driver=kvm2 : exit status 14 (85.511173ms)

                                                
                                                
-- stdout --
	* [multinode-692980-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-692980-m02' is duplicated with machine name 'multinode-692980-m02' in profile 'multinode-692980'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-692980-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-692980-m03 --driver=kvm2 : (44.312813089s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-692980
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-692980: exit status 80 (209.248991ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-692980 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-692980-m03 already exists in multinode-692980-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-692980-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.50s)

TestPreload (204.57s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-447373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-447373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0: (1m5.984135783s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-447373 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-447373 image pull gcr.io/k8s-minikube/busybox: (2.378836424s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-447373
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-447373: (13.332335532s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-447373 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1101 10:29:24.637241  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:29:56.335773  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-447373 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (2m1.844461351s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-447373 image list
helpers_test.go:175: Cleaning up "test-preload-447373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-447373
--- PASS: TestPreload (204.57s)

TestScheduledStopUnix (113.61s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-552039 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-552039 --memory=3072 --driver=kvm2 : (41.890689721s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-552039 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-552039 -n scheduled-stop-552039
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-552039 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:30:50.304168  468355 retry.go:31] will retry after 95.483µs: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.305362  468355 retry.go:31] will retry after 96.684µs: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.306524  468355 retry.go:31] will retry after 130.947µs: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.307707  468355 retry.go:31] will retry after 279.002µs: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.308875  468355 retry.go:31] will retry after 679.597µs: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.310023  468355 retry.go:31] will retry after 1.079495ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.311169  468355 retry.go:31] will retry after 1.000195ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.312331  468355 retry.go:31] will retry after 1.69889ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.314598  468355 retry.go:31] will retry after 2.753393ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.317864  468355 retry.go:31] will retry after 5.732272ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.324147  468355 retry.go:31] will retry after 4.093308ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.328367  468355 retry.go:31] will retry after 8.618837ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.337671  468355 retry.go:31] will retry after 15.092072ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.352907  468355 retry.go:31] will retry after 21.067555ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.374145  468355 retry.go:31] will retry after 30.749387ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
I1101 10:30:50.405488  468355 retry.go:31] will retry after 27.268344ms: open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/scheduled-stop-552039/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-552039 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-552039 -n scheduled-stop-552039
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-552039
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-552039 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-552039
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-552039: exit status 7 (69.736578ms)

                                                
                                                
-- stdout --
	scheduled-stop-552039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-552039 -n scheduled-stop-552039
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-552039 -n scheduled-stop-552039: exit status 7 (68.093935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-552039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-552039
--- PASS: TestScheduledStopUnix (113.61s)

TestSkaffold (127.2s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1193470122 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-698543 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-698543 --memory=3072 --driver=kvm2 : (42.69793785s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1193470122 run --minikube-profile skaffold-698543 --kube-context skaffold-698543 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1193470122 run --minikube-profile skaffold-698543 --kube-context skaffold-698543 --status-check=true --port-forward=false --interactive=false: (1m9.246191514s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-58598bcb7b-cxt25" [9c2b13a7-1732-4dc0-b4f9-952bdbbbc4cc] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004877612s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-56c5bcbd77-8f7dm" [cafba8e3-9ef7-436f-ac5d-256aff9d44ca] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003838643s
helpers_test.go:175: Cleaning up "skaffold-698543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-698543
--- PASS: TestSkaffold (127.20s)

TestRunningBinaryUpgrade (130.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.752478412 start -p running-upgrade-755402 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.752478412 start -p running-upgrade-755402 --memory=3072 --vm-driver=kvm2 : (1m21.901065759s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-755402 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-755402 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (44.917897277s)
helpers_test.go:175: Cleaning up "running-upgrade-755402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-755402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-755402: (1.047398168s)
--- PASS: TestRunningBinaryUpgrade (130.51s)

TestKubernetesUpgrade (189.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (58.58482397s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-455369
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-455369: (13.036780357s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-455369 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-455369 status --format={{.Host}}: exit status 7 (79.881818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 : (46.266877836s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-455369 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (120.731613ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-455369] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-455369
	    minikube start -p kubernetes-upgrade-455369 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4553692 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-455369 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455369 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2 : (1m9.844718557s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-455369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-455369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-455369: (1.052583372s)
--- PASS: TestKubernetesUpgrade (189.06s)

TestStoppedBinaryUpgrade/Setup (2.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.67s)

TestStoppedBinaryUpgrade/Upgrade (94.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2972141672 start -p stopped-upgrade-147885 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2972141672 start -p stopped-upgrade-147885 --memory=3072 --vm-driver=kvm2 : (56.256423648s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2972141672 -p stopped-upgrade-147885 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2972141672 -p stopped-upgrade-147885 stop: (3.678326024s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-147885 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E1101 10:37:27.708014  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-147885 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (34.89657149s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.83s)

TestPause/serial/Start (59.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374456 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-374456 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (59.927912101s)
--- PASS: TestPause/serial/Start (59.93s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-147885
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-147885: (1.153108863s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (56.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374456 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-374456 --alsologtostderr -v=1 --driver=kvm2 : (56.481745787s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (95.787827ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-121297] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-464466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-464466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
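
The non-zero exit above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive, and the test only asserts that minikube rejects the combination with MK_USAGE. A minimal sketch of the conflict and the two ways to clear it, using only commands that appear elsewhere in this report:

    # rejected with exit status 14 (MK_USAGE)
    out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2
    # either drop the version flag ...
    out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2
    # ... or, if a version is pinned in the global config, unset it as the error message suggests
    minikube config unset kubernetes-version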

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-121297 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-121297 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (48.240245716s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-121297 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (21.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E1101 10:39:37.810203  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:39:39.409615  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (20.335452402s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-121297 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-121297 status -o json: exit status 2 (246.296846ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-121297","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-121297
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-121297: (1.033285533s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.62s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-374456 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-374456 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-374456 --output=json --layout=cluster: exit status 2 (304.521137ms)

                                                
                                                
-- stdout --
	{"Name":"pause-374456","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-374456","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
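
The --layout=cluster status shown above is plain JSON, so a paused profile can be inspected programmatically. A small sketch assuming jq is available (jq is not part of the test run; the field names come from the output above):

    # the command exits non-zero for a paused cluster (status 2 in this run), so don't let that abort the shell
    out=$(out/minikube-linux-amd64 status -p pause-374456 --output=json --layout=cluster || true)
    echo "$out" | jq -r '.StatusName'                               # Paused
    echo "$out" | jq -r '.Nodes[0].Components.kubelet.StatusName'   # Stopped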

                                                
                                    
x
+
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-374456 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-374456 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-374456 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.7s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.70s)

                                                
                                    
x
+
TestISOImage/Setup (28.05s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-575214 --no-kubernetes --driver=kvm2 
E1101 10:39:56.336339  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/addons-171954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-575214 --no-kubernetes --driver=kvm2 : (28.049376542s)
--- PASS: TestISOImage/Setup (28.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (54.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-121297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (54.736529626s)
--- PASS: TestNoKubernetes/serial/Start (54.74s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.32s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.32s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which VBoxControl"
E1101 10:48:35.133253  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.22s)
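
Each of the ISO binary checks above is a separate "which" probe over ssh against the guest-575214 profile. When spot-checking an ISO by hand, the same coverage collapses into one loop; this is an illustration of the idea, not the structure of iso_test.go:

    for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
      out/minikube-linux-amd64 -p guest-575214 ssh "which $bin" || echo "missing: $bin"
    done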

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (100.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m40.964207375s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (128.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1101 10:40:18.772000  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (2m8.686079681s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (128.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-121297 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-121297 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.463581ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
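
The check above passes because the minikube ssh command exits non-zero: systemctl is-active exits non-zero for the kubelet unit (status 4 on the remote side here), which is what a --no-kubernetes profile should look like. The same check, runnable by hand with the command from the log:

    if out/minikube-linux-amd64 ssh -p NoKubernetes-121297 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is active (unexpected with --no-kubernetes)"
    else
      echo "kubelet is not running"
    fi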

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-121297
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-121297: (1.389939982s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (57.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-121297 --driver=kvm2 
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-121297 --driver=kvm2 : (57.65510123s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (141.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1101 10:41:40.693973  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m21.339654635s)
--- PASS: TestNetworkPlugins/group/calico/Start (141.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-121297 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-121297 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.912302ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-252788 "pgrep -a kubelet"
I1101 10:41:54.778564  468355 config.go:182] Loaded profile config "auto-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (16.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h8cst" [aa47bd12-fff9-442b-a7f1-7eb77fdc157c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h8cst" [aa47bd12-fff9-442b-a7f1-7eb77fdc157c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.004196546s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (89.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m29.29331269s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
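
For every CNI profile the connectivity suite reduces to three probes against the netcat deployment: in-cluster DNS resolution, a localhost connection from inside the pod, and a hairpin connection from the pod back to its own "netcat" service on port 8080. For the auto profile the probes are, verbatim from the DNS/Localhost/HairPin steps above:

    kubectl --context auto-252788 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same trio repeats below for the kindnet, calico, custom-flannel, false, enable-default-cni, flannel and bridge profiles.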

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-sdztq" [c00fb190-a10b-4953-ba06-ba0fab444f16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005971425s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (74.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m14.572505235s)
--- PASS: TestNetworkPlugins/group/false/Start (74.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-252788 "pgrep -a kubelet"
I1101 10:42:28.912909  468355 config.go:182] Loaded profile config "kindnet-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (33.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9x7ms" [96910f0e-6f7f-407e-aaf2-aca662ea190b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9x7ms" [96910f0e-6f7f-407e-aaf2-aca662ea190b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 33.006281951s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (33.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rszhx" [ef40a5a3-db72-45f4-acf8-5a83351793ee] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006218753s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (70.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m10.560856332s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-252788 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-252788 "pgrep -a kubelet"
I1101 10:43:24.547300  468355 config.go:182] Loaded profile config "custom-flannel-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-252788 replace --force -f testdata/netcat-deployment.yaml
I1101 10:43:24.629478  468355 config.go:182] Loaded profile config "calico-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-48t6h" [40778c53-de1f-417c-9883-7f72aca787bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-48t6h" [40778c53-de1f-417c-9883-7f72aca787bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004049123s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wvdcp" [412a157e-61c3-4bc3-91e0-7173f33fbfb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wvdcp" [412a157e-61c3-4bc3-91e0-7173f33fbfb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005579974s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-252788 "pgrep -a kubelet"
I1101 10:43:42.840334  468355 config.go:182] Loaded profile config "false-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zhrqc" [c3d6f0e3-6c23-4656-a124-5f183a26226d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zhrqc" [c3d6f0e3-6c23-4656-a124-5f183a26226d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.005417978s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m10.021850031s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (86.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1101 10:44:01.818443  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:01.825536  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:01.837064  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:01.858941  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:01.900244  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:01.981795  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:02.143156  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:02.465210  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:03.106881  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:04.388927  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:06.951286  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m26.847666826s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (119.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E1101 10:44:12.073339  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:22.314764  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:24.535557  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/skaffold-698543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:24.637280  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-252788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m59.566414763s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (119.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-252788 "pgrep -a kubelet"
I1101 10:44:29.977536  468355 config.go:182] Loaded profile config "enable-default-cni-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-75vvw" [bd1e4c25-8433-45c6-bf04-2cdce34d612b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:44:42.796325  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-75vvw" [bd1e4c25-8433-45c6-bf04-2cdce34d612b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.005151745s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-2gzb4" [c4db5430-8668-48a7-9cf7-4f7986b20d15] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.166339975s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (65.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-905767 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-905767 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (1m5.166318446s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-252788 "pgrep -a kubelet"
I1101 10:45:11.200369  468355 config.go:182] Loaded profile config "flannel-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zldnz" [a35b56d1-d95a-434a-84ca-dd057fcc3cdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zldnz" [a35b56d1-d95a-434a-84ca-dd057fcc3cdb] Running
E1101 10:45:23.758282  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005038955s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-252788 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I1101 10:45:25.139997  468355 config.go:182] Loaded profile config "bridge-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)
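Note: the DNS, Localhost and HairPin results above are three probes run from inside the netcat test deployment. They can be replayed by hand with the exact commands from the log (a minimal sketch, assuming the netcat Deployment and Service created from testdata/netcat-deployment.yaml are still present in the default namespace):

  # cluster DNS: resolve the kubernetes Service from inside the pod
  kubectl --context flannel-252788 exec deployment/netcat -- nslookup kubernetes.default
  # localhost: probe port 8080 on the pod's own loopback
  kubectl --context flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: probe the same port back through the pod's own Service name
  kubectl --context flannel-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"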

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dd282" [8f33da8e-d60f-4072-ba71-bede01126a36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dd282" [8f33da8e-d60f-4072-ba71-bede01126a36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.060761875s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-656639 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-656639 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1: (1m19.363269244s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.36s)
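Note: as I read the flags, --preload=false makes this profile skip the preloaded images tarball so every image is pulled at start time, and --embed-certs (used by the embed-certs profile below) embeds the client certificates directly into kubeconfig instead of referencing files. Trimmed to the flags that matter here, the invocation is:

  # start a fresh profile without the preloaded image tarball; all images are pulled
  out/minikube-linux-amd64 start -p no-preload-656639 --memory=3072 --preload=false --driver=kvm2 --kubernetes-version=v1.34.1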

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (78.51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709494 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709494 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1: (1m18.506560162s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.51s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-252788 "pgrep -a kubelet"
I1101 10:46:10.014570  468355 config.go:182] Loaded profile config "kubenet-252788": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-252788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qd8fq" [ca7d42b3-e753-416a-9271-f0034773a148] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qd8fq" [ca7d42b3-e753-416a-9271-f0034773a148] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.00375079s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-905767 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [80b124e8-6614-411e-87da-498c203e230e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [80b124e8-6614-411e-87da-498c203e230e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004657331s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-905767 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)
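Note: the DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, then reads the open-file-descriptor limit inside the container. A rough by-hand equivalent (the kubectl wait line is my stand-in for the test helper's polling loop):

  # deploy the test pod, wait until it is Ready, then check the fd limit inside it
  kubectl --context old-k8s-version-905767 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-905767 wait --for=condition=Ready pod/busybox --timeout=8m
  kubectl --context old-k8s-version-905767 exec busybox -- /bin/sh -c "ulimit -n"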

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295683589s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-905767 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)
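Note: EnableAddonWhileActive turns on the metrics-server addon on the running profile while overriding both its image and its registry (fake.domain, presumably so the image can never actually be pulled and only the rendered Deployment is checked), then inspects that Deployment; roughly:

  # enable metrics-server with overridden image/registry, then look at the Deployment it created
  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-905767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context old-k8s-version-905767 describe deploy/metrics-server -n kube-system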

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-252788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-252788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-905767 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-905767 --alsologtostderr -v=3: (13.986670775s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905767 -n old-k8s-version-905767
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905767 -n old-k8s-version-905767: exit status 7 (75.054213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-905767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
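Note: minikube status --format takes a Go template over the status struct, so individual fields such as Host, APIServer and Kubelet can be selected; as the log notes, exit code 7 with "Stopped" is expected here because the cluster was stopped in the previous step. The addon can still be enabled while the profile is down:

  # query a single status field; on a stopped profile this prints "Stopped" and exits 7
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905767
  # the dashboard addon can be enabled even though the cluster is stopped
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-905767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4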

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-905767 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-905767 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (45.606600189s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-905767 -n old-k8s-version-905767
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-327401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1
E1101 10:46:45.680265  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.060350  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.066877  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.078354  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.099936  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.141423  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.222941  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.384389  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:55.705933  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:56.347640  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:46:57.629600  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:00.190988  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-327401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1: (1m23.048614947s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-656639 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [455b903d-bd64-4210-8448-d3b581ac536c] Pending
helpers_test.go:352: "busybox" [455b903d-bd64-4210-8448-d3b581ac536c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 10:47:05.313226  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [455b903d-bd64-4210-8448-d3b581ac536c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005585862s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-656639 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-709494 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1d1be9a6-e500-4202-86cf-0485399b9b12] Pending
helpers_test.go:352: "busybox" [1d1be9a6-e500-4202-86cf-0485399b9b12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1d1be9a6-e500-4202-86cf-0485399b9b12] Running
E1101 10:47:22.729773  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:22.736262  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:22.747756  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:22.769330  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:22.810849  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004773643s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-709494 exec busybox -- /bin/sh -c "ulimit -n"
E1101 10:47:22.892900  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-656639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-656639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010822536s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-656639 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.68s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-656639 --alsologtostderr -v=3
E1101 10:47:15.555469  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-656639 --alsologtostderr -v=3: (14.684807066s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-709494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 10:47:23.054795  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:23.376650  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:24.018976  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-709494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178733866s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-709494 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (14.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-709494 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-709494 --alsologtostderr -v=3: (14.74809931s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k6w7m" [119055de-dbf4-4bd0-9c3a-2488398269c3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1101 10:47:25.301136  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:27.863031  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k6w7m" [119055de-dbf4-4bd0-9c3a-2488398269c3] Running
E1101 10:47:32.984969  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:36.037063  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005106018s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656639 -n no-preload-656639
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656639 -n no-preload-656639: exit status 7 (74.076754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-656639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-656639 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-656639 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.1: (48.272186782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656639 -n no-preload-656639
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-k6w7m" [119055de-dbf4-4bd0-9c3a-2488398269c3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005214987s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-905767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709494 -n embed-certs-709494
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709494 -n embed-certs-709494: exit status 7 (89.60836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709494 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709494 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.1: (58.254121099s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709494 -n embed-certs-709494
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-905767 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-905767 --alsologtostderr -v=1
E1101 10:47:43.226753  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905767 -n old-k8s-version-905767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905767 -n old-k8s-version-905767: exit status 2 (241.23141ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-905767 -n old-k8s-version-905767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-905767 -n old-k8s-version-905767: exit status 2 (243.920629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-905767 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905767 -n old-k8s-version-905767
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-905767 -n old-k8s-version-905767
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)
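Note: the Pause sequence above is: pause the profile, confirm via the status templates that the API server reports Paused and the kubelet reports Stopped (each with exit code 2, which the test accepts), then unpause and confirm both come back. By hand that is roughly:

  out/minikube-linux-amd64 pause -p old-k8s-version-905767
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-905767   # prints "Paused", exits 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-905767     # prints "Stopped", exits 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-905767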

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (74.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1
E1101 10:48:03.708444  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1: (1m14.909213153s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.91s)
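Note: this profile exercises a CNI-only bring-up. As I understand the flags, --extra-config passes the setting straight through to the named component (kubeadm here, setting the pod network CIDR), and --wait is narrowed to just the API server, system pods and the default service account. Trimmed to those flags:

  # CNI network plugin with a custom pod CIDR handed to kubeadm; wait only for core components
  out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --kubernetes-version=v1.34.1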

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-327401 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6d955263-ec0f-47bf-98f5-23d3fa254e76] Pending
helpers_test.go:352: "busybox" [6d955263-ec0f-47bf-98f5-23d3fa254e76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6d955263-ec0f-47bf-98f5-23d3fa254e76] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003570717s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-327401 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-327401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-327401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.045584313s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-327401 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (14.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-327401 --alsologtostderr -v=3
E1101 10:48:16.999089  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/auto-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-327401 --alsologtostderr -v=3: (14.136124841s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qvpvz" [d21eae3c-7d74-4905-96d1-d0f0074044da] Running
E1101 10:48:18.411824  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.418467  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.429994  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.451927  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.493472  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.575321  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:18.737096  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:19.058421  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:19.699785  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:20.981267  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:23.542604  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004071436s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qvpvz" [d21eae3c-7d74-4905-96d1-d0f0074044da] Running
E1101 10:48:24.878752  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:24.885272  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:24.896730  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:24.918226  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:24.959835  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:25.041366  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:25.202742  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:25.524497  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:26.166879  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:27.449206  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:28.664035  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004941368s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-656639 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-656639 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-656639 --alsologtostderr -v=1
E1101 10:48:30.011032  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656639 -n no-preload-656639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656639 -n no-preload-656639: exit status 2 (272.628173ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-656639 -n no-preload-656639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-656639 -n no-preload-656639: exit status 2 (300.057023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-656639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656639 -n no-preload-656639
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-656639 -n no-preload-656639
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401: exit status 7 (78.38176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-327401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-327401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-327401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.1: (51.499632145s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.73s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.2s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)
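Note: the PersistentMounts subtests that follow all use the same pattern: SSH into the guest-575214 VM and check, via df restricted to ext4, that the path is served from the persistent ext4 disk (rather than, presumably, a tmpfs that would not survive a reboot). For any one path that is simply:

  # the grep only matches if /data is mounted from an ext4 filesystem inside the guest
  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /data | grep /data"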

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
E1101 10:48:38.905443  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/calico-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.49s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.49s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.32s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.32s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.47s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.47s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.49s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.49s)
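Each of the PersistentMounts subtests above runs the same probe: ssh into the guest and assert the path is backed by an ext4 mount. A compact manual equivalent, looping over the paths checked in this run (profile name taken from the log):

# confirm every persistent path is an ext4 mount inside the guest
for p in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
         /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
  out/minikube-linux-amd64 -p guest-575214 ssh "df -t ext4 $p | grep $p" || echo "$p: not ext4-backed"
done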

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qz9dn" [de169684-c0b1-46ed-9d38-81fdf1d93270] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qz9dn" [de169684-c0b1-46ed-9d38-81fdf1d93270] Running
E1101 10:48:43.077093  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.083613  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.095115  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.116582  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.158224  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.239797  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.401485  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:43.723308  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:44.365462  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:44.670363  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/kindnet-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:45.375054  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:48:45.646988  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004335788s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-575214 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
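The eBPF check simply looks for the kernel's exported BTF blob; /sys/kernel/btf/vmlinux is present when the guest kernel is built with CONFIG_DEBUG_INFO_BTF=y. The same probe by hand:

# BTF type info must be exported for CO-RE style eBPF tooling to work in the guest
out/minikube-linux-amd64 -p guest-575214 ssh \
  "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"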

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qz9dn" [de169684-c0b1-46ed-9d38-81fdf1d93270] Running
E1101 10:48:48.208968  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004685694s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-709494 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-709494 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
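To inspect the same image inventory by hand, the listing can be dumped as JSON or as the plain table; the jq pipe below is only for pretty-printing and assumes jq is installed (no particular JSON field layout is assumed here):

# list images loaded into the cluster, as the test does
out/minikube-linux-amd64 -p embed-certs-709494 image list --format=json | jq .
# or the human-readable form
out/minikube-linux-amd64 -p embed-certs-709494 image list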

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-709494 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709494 -n embed-certs-709494
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709494 -n embed-certs-709494: exit status 2 (258.204802ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-709494 -n embed-certs-709494
E1101 10:48:53.330637  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-709494 -n embed-certs-709494: exit status 2 (263.27208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-709494 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709494 -n embed-certs-709494
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-709494 -n embed-certs-709494
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)
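A sketch of the pause/verify/unpause cycle the test performs; per the log, `status` exiting 2 while the cluster is paused is expected rather than a failure:

P=embed-certs-709494
out/minikube-linux-amd64 pause -p "$P" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$P" -n "$P"   # prints "Paused", exits 2
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$P" -n "$P"     # prints "Stopped", exits 2
out/minikube-linux-amd64 unpause -p "$P" --alsologtostderr -v=1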

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-948742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 10:49:03.572501  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)
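The addon is enabled with its image and registry overridden to a placeholder, presumably so the manifest wiring can be validated without pulling a real metrics-server image. The exact invocation from the log:

out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-948742 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain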

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (13.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-948742 --alsologtostderr -v=3
E1101 10:49:05.856461  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/custom-flannel-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-948742 --alsologtostderr -v=3: (13.565755166s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-948742 -n newest-cni-948742
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-948742 -n newest-cni-948742: exit status 7 (70.096914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-948742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (32.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.1: (32.579788334s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-948742 -n newest-cni-948742
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.86s)
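The newest-cni second start differs from the default profile's restart in that it only waits for the apiserver, system pods and default service account, and passes the CNI-related flags shown above. Replayed by hand (values copied from the log):

out/minikube-linux-amd64 start -p newest-cni-948742 --memory=3072 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --kubernetes-version=v1.34.1
out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-948742 -n newest-cni-948742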

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kmc44" [4205ce93-c0f1-4574-83b6-3a9bccb642be] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kmc44" [4205ce93-c0f1-4574-83b6-3a9bccb642be] Running
E1101 10:49:24.054108  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/false-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:24.636906  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/functional-498549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004445489s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kmc44" [4205ce93-c0f1-4574-83b6-3a9bccb642be] Running
E1101 10:49:29.521907  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/gvisor-392081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.223013  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.229581  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.241096  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.262596  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.304615  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.386218  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.547905  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:30.869744  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:31.511765  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:49:32.794010  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005506431s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-327401 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)
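A manual version of the dashboard check the test performs after the restart: wait for the dashboard pods (label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace) to become Ready, then describe the metrics-scraper deployment. The context name is specific to this run, and the 120s timeout is an illustrative choice:

kubectl --context default-k8s-diff-port-327401 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s
kubectl --context default-k8s-diff-port-327401 -n kubernetes-dashboard \
  describe deploy/dashboard-metrics-scraper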

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-327401 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-327401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
E1101 10:49:35.355402  468355 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-464466/.minikube/profiles/enable-default-cni-252788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401: exit status 2 (227.916073ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401: exit status 2 (223.463296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-327401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-327401 -n default-k8s-diff-port-327401
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-948742 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-948742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-948742 -n newest-cni-948742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-948742 -n newest-cni-948742: exit status 2 (217.281515ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-948742 -n newest-cni-948742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-948742 -n newest-cni-948742: exit status 2 (219.698095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-948742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-948742 -n newest-cni-948742
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-948742 -n newest-cni-948742
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                    

Test skip (34/364)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
187 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
272 TestNetworkPlugins/group/cilium 4.04
278 TestStartStop/group/disable-driver-mounts 0.23
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-252788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-252788

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-252788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-252788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-252788" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-252788" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-252788" does not exist

>>> k8s: coredns logs:
error: context "cilium-252788" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-252788" does not exist

>>> k8s: api server logs:
error: context "cilium-252788" does not exist

>>> host: /etc/cni:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: ip a s:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: ip r s:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: iptables-save:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: iptables table nat:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-252788

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-252788

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-252788" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-252788" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-252788

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-252788

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-252788" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-252788" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-252788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-252788" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-252788" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: kubelet daemon config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> k8s: kubelet logs:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-252788

>>> host: docker daemon status:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: docker daemon config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: docker system info:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: cri-docker daemon status:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: cri-docker daemon config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: cri-dockerd version:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: containerd daemon status:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: containerd daemon config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: containerd config dump:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: crio daemon status:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: crio daemon config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: /etc/crio:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

>>> host: crio config:
* Profile "cilium-252788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-252788"

----------------------- debugLogs end: cilium-252788 [took: 3.863805339s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-252788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-252788
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)

x
+
TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-593327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-593327
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)