Test Report: KVM_Linux_crio 22094

4d318e45b0dac190a241a23c5ddc63ef7c67bab3:2025-12-10:42711

Test fail (16/431)

TestAddons/parallel/Registry (362.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.94946ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:338: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:386: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819501 -n addons-819501
addons_test.go:386: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-12-10 05:42:18.613760345 +0000 UTC m=+822.665212472
addons_test.go:386: (dbg) Run:  kubectl --context addons-819501 describe po registry-6b586f9694-lkhvn -n kube-system
addons_test.go:386: (dbg) kubectl --context addons-819501 describe po registry-6b586f9694-lkhvn -n kube-system:
Name:             registry-6b586f9694-lkhvn
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-819501/192.168.50.227
Start Time:       Wed, 10 Dec 2025 05:29:53 +0000
Labels:           actual-registry=true
addonmanager.kubernetes.io/mode=Reconcile
kubernetes.io/minikube-addons=registry
pod-template-hash=6b586f9694
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/registry-6b586f9694
Containers:
registry:
Container ID:   
Image:          docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e
Image ID:       
Port:           5000/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
REGISTRY_STORAGE_DELETE_ENABLED:  true
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hj7fv (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hj7fv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                           Age                   From               Message
----     ------                           ----                  ----               -------
Normal   Scheduled                        12m                   default-scheduler  Successfully assigned kube-system/registry-6b586f9694-lkhvn to addons-819501
Warning  Failed                           9m2s                  kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           7m34s (x3 over 11m)   kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           7m34s (x4 over 11m)   kubelet            Error: ErrImagePull
Warning  Failed                           7m11s (x7 over 11m)   kubelet            Error: ImagePullBackOff
Normal   Pulling                          6m14s (x5 over 12m)   kubelet            Pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
Warning  FailedToRetrieveImagePullSecret  2m21s (x28 over 12m)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
Normal   BackOff                          2m8s (x23 over 11m)   kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
addons_test.go:386: (dbg) Run:  kubectl --context addons-819501 logs registry-6b586f9694-lkhvn -n kube-system
addons_test.go:386: (dbg) Non-zero exit: kubectl --context addons-819501 logs registry-6b586f9694-lkhvn -n kube-system: exit status 1 (85.978272ms)

** stderr **
	Error from server (BadRequest): container "registry" in pod "registry-6b586f9694-lkhvn" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:386: kubectl --context addons-819501 logs registry-6b586f9694-lkhvn -n kube-system: exit status 1
addons_test.go:387: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
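The events in the describe output above give the root cause: every pull of docker.io/registry:3.0.0 is rejected by Docker Hub's unauthenticated pull rate limit (toomanyrequests), so the pod never leaves ImagePullBackOff; this is a registry quota problem rather than a cluster fault. As a minimal diagnostic sketch (not part of the test run), the remaining anonymous quota can be read from Docker Hub's documented rate-limit headers; since the KVM guest reaches the registry through the host's NAT, running this on the Jenkins host should reflect the same per-IP limit:

# Fetch an anonymous token for Docker's documented rate-limit preview repository,
# then read the ratelimit-limit / ratelimit-remaining headers from a HEAD request
# (HEAD requests are documented as not counting against the quota).
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'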
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-819501 -n addons-819501
helpers_test.go:253: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 logs -n 25: (1.383275002s)
helpers_test.go:261: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-829998                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-160810                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-160810 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ --download-only -p binary-mirror-177372 --alsologtostderr --binary-mirror http://127.0.0.1:39073 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ -p binary-mirror-177372                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ addons  │ disable dashboard -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ start   │ -p addons-819501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:35 UTC │
	│ addons  │ addons-819501 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:35 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ enable headlamp -p addons-819501 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:37 UTC │
	│ ssh     │ addons-819501 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │                     │
	│ addons  │ addons-819501 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                         │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ ip      │ addons-819501 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ addons  │ addons-819501 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ addons  │ addons-819501 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ addons  │ addons-819501 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:41 UTC │ 10 Dec 25 05:41 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:52.105088  248270 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:52.105179  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105184  248270 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:52.105188  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105358  248270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:28:52.105849  248270 out.go:368] Setting JSON to false
	I1210 05:28:52.106664  248270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25879,"bootTime":1765318653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:52.106723  248270 start.go:143] virtualization: kvm guest
	I1210 05:28:52.108609  248270 out.go:179] * [addons-819501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:52.110355  248270 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:28:52.110393  248270 notify.go:221] Checking for updates...
	I1210 05:28:52.112643  248270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:52.114625  248270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:52.115949  248270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.117122  248270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:28:52.118420  248270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:28:52.119836  248270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:52.150913  248270 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 05:28:52.152295  248270 start.go:309] selected driver: kvm2
	I1210 05:28:52.152312  248270 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:28:52.152325  248270 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:28:52.153083  248270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:52.153343  248270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:28:52.153369  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:28:52.153432  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:28:52.153449  248270 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:52.153504  248270 start.go:353] cluster config:
	{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:28:52.153618  248270 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.155323  248270 out.go:179] * Starting "addons-819501" primary control-plane node in "addons-819501" cluster
	I1210 05:28:52.156436  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 05:28:52.175813  248270 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 05:28:52.189575  248270 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 05:28:52.189954  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.189997  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:28:52.190028  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json: {Name:mk888fb8e14ee6a18b9f0bd32a9670b388cb1bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:28:52.190232  248270 start.go:360] acquireMachinesLock for addons-819501: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 05:28:52.190306  248270 start.go:364] duration metric: took 53.257µs to acquireMachinesLock for "addons-819501"
	I1210 05:28:52.190335  248270 start.go:93] Provisioning new machine with config: &{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:28:52.190423  248270 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 05:28:52.192350  248270 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1210 05:28:52.192578  248270 start.go:159] libmachine.API.Create for "addons-819501" (driver="kvm2")
	I1210 05:28:52.192614  248270 client.go:173] LocalClient.Create starting
	I1210 05:28:52.192740  248270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem
	I1210 05:28:52.332984  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.335651  248270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem
	I1210 05:28:52.470748  248270 main.go:143] libmachine: creating domain...
	I1210 05:28:52.470771  248270 main.go:143] libmachine: creating network...
	I1210 05:28:52.472517  248270 main.go:143] libmachine: found existing default network
	I1210 05:28:52.472684  248270 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.473207  248270 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:1f:09} reservation:<nil>}
	I1210 05:28:52.473578  248270 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cefc30}
	I1210 05:28:52.473663  248270 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-819501</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.479464  248270 main.go:143] libmachine: creating private network mk-addons-819501 192.168.50.0/24...
	I1210 05:28:52.482923  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.558222  248270 main.go:143] libmachine: private network mk-addons-819501 192.168.50.0/24 created
	I1210 05:28:52.558553  248270 main.go:143] libmachine: <network>
	  <name>mk-addons-819501</name>
	  <uuid>c2bdce80-7332-4fd7-b021-02079a969afe</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:ec:cf:14'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.558584  248270 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.558604  248270 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:28:52.558615  248270 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.558685  248270 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22094-243461/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 05:28:52.641062  248270 cache.go:107] acquiring lock: {Name:mk4f601fcccaa8421d9a471640a96feb5df57ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641095  248270 cache.go:107] acquiring lock: {Name:mka12e8a345a6dc24c0da40f31d69a169b73fc8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641054  248270 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641134  248270 cache.go:107] acquiring lock: {Name:mk72740fe8a4d4eb6e3ad18d28ff308f87f86eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641137  248270 cache.go:107] acquiring lock: {Name:mkc558d20fc07b350030510216ebcf1d2df4b57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641148  248270 cache.go:107] acquiring lock: {Name:mkca46313d0e39171add494fd1f96b98422fb511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641203  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 05:28:52.641216  248270 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 161.132µs
	I1210 05:28:52.641226  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:28:52.641234  248270 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 05:28:52.641227  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 05:28:52.641058  248270 cache.go:107] acquiring lock: {Name:mkc561f0208895e5efe372932a5a00136ddcb2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641242  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 05:28:52.641248  248270 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 118.214µs
	I1210 05:28:52.641254  248270 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 166.04µs
	I1210 05:28:52.641260  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 05:28:52.641263  248270 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 05:28:52.641265  248270 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 163.887µs
	I1210 05:28:52.641287  248270 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 246.182µs
	I1210 05:28:52.641301  248270 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 05:28:52.641294  248270 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 05:28:52.641293  248270 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 160.457µs
	I1210 05:28:52.641270  248270 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641237  248270 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 216.984µs
	I1210 05:28:52.641445  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:28:52.641457  248270 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 230.713µs
	I1210 05:28:52.641467  248270 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:28:52.641313  248270 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 05:28:52.641463  248270 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:28:52.641504  248270 cache.go:87] Successfully saved all images to host disk.
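The cache.go lines above are hits against the image tarballs minikube keeps on the host under the MINIKUBE_HOME shown earlier in this log, so none of the Kubernetes component images had to be downloaded for this start. A quick way to see the same state by hand, purely illustrative and using the paths from this run:

# List the per-architecture image tarball cache that the cache.go checks refer to.
ls -R /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64
# Once the node is up, the images actually loaded into the CRI-O runtime can be listed too.
out/minikube-linux-amd64 -p addons-819501 image ls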
	I1210 05:28:52.824390  248270 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa...
	I1210 05:28:52.868316  248270 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk...
	I1210 05:28:52.868386  248270 main.go:143] libmachine: Writing magic tar header
	I1210 05:28:52.868426  248270 main.go:143] libmachine: Writing SSH key tar header
	I1210 05:28:52.868507  248270 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.868575  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501
	I1210 05:28:52.868607  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 (perms=drwx------)
	I1210 05:28:52.868619  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines
	I1210 05:28:52.868633  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines (perms=drwxr-xr-x)
	I1210 05:28:52.868644  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.868656  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube (perms=drwxr-xr-x)
	I1210 05:28:52.868664  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461
	I1210 05:28:52.868674  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461 (perms=drwxrwxr-x)
	I1210 05:28:52.868688  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 05:28:52.868698  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 05:28:52.868706  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 05:28:52.868716  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 05:28:52.868725  248270 main.go:143] libmachine: checking permissions on dir: /home
	I1210 05:28:52.868733  248270 main.go:143] libmachine: skipping /home - not owner
	I1210 05:28:52.868738  248270 main.go:143] libmachine: defining domain...
	I1210 05:28:52.870229  248270 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:28:52.875406  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:bd:43:bd in network default
	I1210 05:28:52.876104  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:52.876118  248270 main.go:143] libmachine: starting domain...
	I1210 05:28:52.876122  248270 main.go:143] libmachine: ensuring networks are active...
	I1210 05:28:52.877154  248270 main.go:143] libmachine: Ensuring network default is active
	I1210 05:28:52.877579  248270 main.go:143] libmachine: Ensuring network mk-addons-819501 is active
	I1210 05:28:52.878149  248270 main.go:143] libmachine: getting domain XML...
	I1210 05:28:52.879236  248270 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <uuid>ba6b4ebf-a050-46a9-ba18-2a04e8831219</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:0b:26:32'/>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:43:bd'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
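The two XML documents above are the definition the kvm2 driver submits and the expanded form libvirt reports back once the UUID, MAC/PCI addresses, and controllers have been assigned. A sketch of how the same state could be inspected outside of minikube, assuming virsh is installed on the host and using the names from this run:

# Show the NAT'd default network and the per-profile private network.
virsh --connect qemu:///system net-list --all
virsh --connect qemu:///system net-dumpxml mk-addons-819501
# Dump the live definition of the VM created for this profile.
virsh --connect qemu:///system dumpxml addons-819501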
	
	I1210 05:28:54.168962  248270 main.go:143] libmachine: waiting for domain to start...
	I1210 05:28:54.170546  248270 main.go:143] libmachine: domain is now running
	I1210 05:28:54.170570  248270 main.go:143] libmachine: waiting for IP...
	I1210 05:28:54.171414  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.172058  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.172073  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.172400  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.172444  248270 retry.go:31] will retry after 204.150227ms: waiting for domain to come up
	I1210 05:28:54.378048  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.378807  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.378824  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.379142  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.379187  248270 retry.go:31] will retry after 336.586353ms: waiting for domain to come up
	I1210 05:28:54.717782  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.718612  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.718630  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.719044  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.719085  248270 retry.go:31] will retry after 427.236784ms: waiting for domain to come up
	I1210 05:28:55.147903  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.148695  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.148717  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.149130  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.149170  248270 retry.go:31] will retry after 496.970231ms: waiting for domain to come up
	I1210 05:28:55.648236  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.648976  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.648993  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.649385  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.649419  248270 retry.go:31] will retry after 685.299323ms: waiting for domain to come up
	I1210 05:28:56.336314  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:56.336946  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:56.336962  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:56.337319  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:56.337366  248270 retry.go:31] will retry after 806.287256ms: waiting for domain to come up
	I1210 05:28:57.145591  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.146271  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.146294  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.146653  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.146702  248270 retry.go:31] will retry after 821.107194ms: waiting for domain to come up
	I1210 05:28:57.969805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.970505  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.970524  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.970852  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.970909  248270 retry.go:31] will retry after 1.109916147s: waiting for domain to come up
	I1210 05:28:59.082244  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:59.082858  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:59.082893  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:59.083281  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:59.083325  248270 retry.go:31] will retry after 1.728427418s: waiting for domain to come up
	I1210 05:29:00.814529  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:00.815344  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:00.815363  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:00.815773  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:00.815820  248270 retry.go:31] will retry after 1.517793987s: waiting for domain to come up
	I1210 05:29:02.335622  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:02.336400  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:02.336422  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:02.336895  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:02.336945  248270 retry.go:31] will retry after 2.6142192s: waiting for domain to come up
	I1210 05:29:04.954635  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:04.955354  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:04.955379  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:04.955714  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:04.955755  248270 retry.go:31] will retry after 2.739648926s: waiting for domain to come up
	I1210 05:29:07.696760  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:07.697527  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:07.697545  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:07.697920  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:07.697964  248270 retry.go:31] will retry after 2.936432251s: waiting for domain to come up
	I1210 05:29:10.638105  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638831  248270 main.go:143] libmachine: domain addons-819501 has current primary IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638865  248270 main.go:143] libmachine: found domain IP: 192.168.50.227
	I1210 05:29:10.638889  248270 main.go:143] libmachine: reserving static IP address...
	I1210 05:29:10.639331  248270 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-819501", mac: "52:54:00:0b:26:32", ip: "192.168.50.227"} in network mk-addons-819501
	I1210 05:29:10.837647  248270 main.go:143] libmachine: reserved static IP address 192.168.50.227 for domain addons-819501
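The retry entries above follow a simple grow-and-jitter backoff while waiting for the libvirt lease to expose an address. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP probe rather than minikube's actual libmachine calls:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for the "unable to find current IP address" condition in
// the log above; lookupIP is a hypothetical probe, not minikube's API.
var errNoIP = errors.New("no IP address yet")

func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the sixth try
		return "", errNoIP
	}
	return "192.168.50.227", nil
}

func main() {
	delay := 400 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found domain IP:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}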
	I1210 05:29:10.837674  248270 main.go:143] libmachine: waiting for SSH...
	I1210 05:29:10.837683  248270 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 05:29:10.841998  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.842734  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.842776  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.843052  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.843817  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.843851  248270 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 05:29:10.953140  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:10.953577  248270 main.go:143] libmachine: domain creation complete
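Each "About to run SSH command" exchange above is a single command executed on the freshly created guest; the first is just exit 0 to confirm the SSH server is reachable. A minimal sketch of such a one-shot remote command with golang.org/x/crypto/ssh (address, user and key path are placeholders; minikube's own ssh_runner handles far more):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest and runs one command, roughly what each
// "About to run SSH command" entry above corresponds to.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.50.227:22", "docker", "/path/to/id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}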
	I1210 05:29:10.955178  248270 machine.go:94] provisionDockerMachine start ...
	I1210 05:29:10.957672  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958111  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.958134  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958334  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.958541  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.958552  248270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:29:11.063469  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 05:29:11.063510  248270 buildroot.go:166] provisioning hostname "addons-819501"
	I1210 05:29:11.066851  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067359  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.067386  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067581  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.067818  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.067836  248270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-819501 && echo "addons-819501" | sudo tee /etc/hostname
	I1210 05:29:11.191238  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-819501
	
	I1210 05:29:11.194283  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194634  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.194662  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194813  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.195030  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.195045  248270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-819501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-819501/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-819501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:29:11.310332  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:11.310364  248270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 05:29:11.310433  248270 buildroot.go:174] setting up certificates
	I1210 05:29:11.310448  248270 provision.go:84] configureAuth start
	I1210 05:29:11.314015  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.314505  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.314528  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317045  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317504  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.317533  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317778  248270 provision.go:143] copyHostCerts
	I1210 05:29:11.317897  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 05:29:11.318079  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 05:29:11.318163  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 05:29:11.318221  248270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.addons-819501 san=[127.0.0.1 192.168.50.227 addons-819501 localhost minikube]
	I1210 05:29:11.380449  248270 provision.go:177] copyRemoteCerts
	I1210 05:29:11.380516  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:29:11.383191  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383530  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.383557  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383724  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.468790  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 05:29:11.501764  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:29:11.536197  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:29:11.569192  248270 provision.go:87] duration metric: took 258.704158ms to configureAuth
	I1210 05:29:11.569224  248270 buildroot.go:189] setting minikube options for container-runtime
	I1210 05:29:11.569456  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:11.572768  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573263  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.573289  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573596  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.573815  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.573830  248270 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:29:11.833231  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:29:11.833265  248270 machine.go:97] duration metric: took 878.061601ms to provisionDockerMachine
	I1210 05:29:11.833278  248270 client.go:176] duration metric: took 19.640654056s to LocalClient.Create
	I1210 05:29:11.833288  248270 start.go:167] duration metric: took 19.640714044s to libmachine.API.Create "addons-819501"
	I1210 05:29:11.833300  248270 start.go:293] postStartSetup for "addons-819501" (driver="kvm2")
	I1210 05:29:11.833326  248270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:29:11.833399  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:29:11.836778  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837269  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.837308  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837481  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.922086  248270 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:29:11.927731  248270 info.go:137] Remote host: Buildroot 2025.02
	I1210 05:29:11.927773  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 05:29:11.927871  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 05:29:11.927934  248270 start.go:296] duration metric: took 94.612566ms for postStartSetup
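The filesync entries above scan two local directories for assets that would be copied onto the guest during postStartSetup. A small sketch of that scan step, assuming a simple directory walk; the root path is a placeholder:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listAssets mirrors the "Scanning ... for local assets" step: walk the
// directory and collect regular files that would be synced to the guest.
func listAssets(root string) ([]string, error) {
	var assets []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() {
			assets = append(assets, path)
		}
		return nil
	})
	return assets, err
}

func main() {
	assets, err := listAssets("/home/jenkins/.minikube/files")
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for _, a := range assets {
		fmt.Println("would sync:", a)
	}
}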
	I1210 05:29:11.931495  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.931980  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.932019  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.932307  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:29:11.932540  248270 start.go:128] duration metric: took 19.74210366s to createHost
	I1210 05:29:11.934767  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935144  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.935166  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935324  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.935513  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.935522  248270 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 05:29:12.046287  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765344552.004969125
	
	I1210 05:29:12.046317  248270 fix.go:216] guest clock: 1765344552.004969125
	I1210 05:29:12.046328  248270 fix.go:229] Guest: 2025-12-10 05:29:12.004969125 +0000 UTC Remote: 2025-12-10 05:29:11.932556032 +0000 UTC m=+19.877288748 (delta=72.413093ms)
	I1210 05:29:12.046353  248270 fix.go:200] guest clock delta is within tolerance: 72.413093ms
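The fix.go entries compare the guest's date +%s.%N output with the host clock and accept the machine when the delta stays inside a tolerance. A small sketch of that parse-and-compare step, assuming a nine-digit nanosecond field and an illustrative tolerance value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (seconds, then a
// nine-digit nanosecond field) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log above; against a live guest the delta
	// below would be tens of milliseconds rather than the age of that stamp.
	guest, err := parseGuestClock("1765344552.004969125\n")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative, not minikube's setting
	fmt.Printf("guest clock delta: %v (within %v tolerance: %v)\n",
		delta, tolerance, delta <= tolerance)
}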
	I1210 05:29:12.046359  248270 start.go:83] releasing machines lock for "addons-819501", held for 19.85604026s
	I1210 05:29:12.049360  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.049703  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.049730  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.050472  248270 ssh_runner.go:195] Run: cat /version.json
	I1210 05:29:12.050505  248270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:29:12.053634  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054149  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054174  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054210  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054370  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.054796  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054838  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.055088  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.153950  248270 ssh_runner.go:195] Run: systemctl --version
	I1210 05:29:12.161170  248270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:29:12.329523  248270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:29:12.337761  248270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:29:12.337846  248270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:29:12.363822  248270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:29:12.363854  248270 start.go:496] detecting cgroup driver to use...
	I1210 05:29:12.363953  248270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:29:12.391660  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:29:12.411256  248270 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:29:12.411332  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:29:12.430231  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:29:12.447813  248270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:29:12.603440  248270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:29:12.827553  248270 docker.go:234] disabling docker service ...
	I1210 05:29:12.827647  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:29:12.846039  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:29:12.862361  248270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:29:13.020176  248270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:29:13.164368  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:29:13.182024  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:29:13.206545  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:13.357154  248270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:29:13.357230  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.371402  248270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 05:29:13.371473  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.385362  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.398751  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.412259  248270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:29:13.426396  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.440016  248270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.462382  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.476454  248270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:29:13.487470  248270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:29:13.487559  248270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:29:13.511008  248270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:29:13.525764  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:13.668661  248270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:29:13.804341  248270 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:29:13.804461  248270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:29:13.811120  248270 start.go:564] Will wait 60s for crictl version
	I1210 05:29:13.811237  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:13.816221  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 05:29:13.855240  248270 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 05:29:13.855361  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.886038  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.919951  248270 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1210 05:29:13.923902  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924339  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:13.924363  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924587  248270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 05:29:13.929723  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:13.945980  248270 kubeadm.go:884] updating cluster {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:29:13.946170  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.110289  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.252203  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.408812  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:29:14.408919  248270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:29:14.443222  248270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 05:29:14.443255  248270 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:29:14.443321  248270 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.443333  248270 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.443375  248270 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.443403  248270 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.443402  248270 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.443464  248270 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.443462  248270 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.443481  248270 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.444803  248270 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.444813  248270 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.444829  248270 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.444808  248270 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.444831  248270 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.444820  248270 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.586669  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.587332  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.588817  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.594403  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.600393  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.610504  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:29:14.612226  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.715068  248270 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:29:14.715118  248270 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.715173  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 05:29:14.779487  248270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 05:29:14.779538  248270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.779539  248270 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 05:29:14.779571  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779582  248270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.779618  248270 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 05:29:14.779660  248270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.779708  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779715  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779550  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787172  248270 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 05:29:14.787221  248270 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.787237  248270 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 05:29:14.787275  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787288  248270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.787332  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.787336  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.791460  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.791483  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.791502  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.791543  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.866781  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.866836  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.866853  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.883432  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.892951  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.893016  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.912927  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.976361  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.984059  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.003284  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:15.029014  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:15.048649  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:15.048661  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:15.048727  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:15.116050  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:15.143539  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.143601  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 05:29:15.143736  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:15.173190  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 05:29:15.173206  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 05:29:15.173333  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:15.173334  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:15.188713  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 05:29:15.188872  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:15.194426  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 05:29:15.194565  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.218546  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 05:29:15.218551  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:29:15.218634  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:29:15.218700  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.238721  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 05:29:15.238751  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 05:29:15.238779  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 05:29:15.238854  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 05:29:15.238898  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 05:29:15.238941  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 05:29:15.238975  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 05:29:15.238991  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 05:29:15.238858  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:15.239079  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 05:29:15.244693  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:29:15.244738  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 05:29:15.336790  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 05:29:15.336841  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
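Each image above is handled the same way: stat the target under /var/lib/minikube/images first, and only transfer the cached tarball when the stat fails. A compact sketch of that check-then-transfer pattern using the system ssh/scp binaries (host and paths are illustrative; minikube drives this through its own SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile mirrors the existence-check-then-transfer pattern in the
// log: stat the file on the guest first, and copy it only when stat fails.
func ensureRemoteFile(host, localPath, remotePath string) error {
	stat := exec.Command("ssh", host, "stat", "-c", "%s %y", remotePath)
	if err := stat.Run(); err == nil {
		fmt.Println("already present, skipping transfer:", remotePath)
		return nil
	}
	fmt.Println("missing on guest, copying:", remotePath)
	return exec.Command("scp", localPath, host+":"+remotePath).Run()
}

func main() {
	// Hypothetical local cache path and guest address for illustration only.
	err := ensureRemoteFile(
		"docker@192.168.50.227",
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	if err != nil {
		fmt.Println("transfer failed:", err)
	}
}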
	I1210 05:29:15.374498  248270 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.374589  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.442120  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859327  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:29:15.859390  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859423  248270 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 05:29:15.859450  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859471  248270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859541  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:17.763703  248270 ssh_runner.go:235] Completed: which crictl: (1.904127102s)
	I1210 05:29:17.763747  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.904260399s)
	I1210 05:29:17.763776  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 05:29:17.763799  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:17.763815  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.763860  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.801244  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:19.427418  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.663530033s)
	I1210 05:29:19.427461  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 05:29:19.427466  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.626181448s)
	I1210 05:29:19.427490  248270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427547  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427548  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:21.515976  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088322228s)
	I1210 05:29:21.516048  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:29:21.515979  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.088327761s)
	I1210 05:29:21.516139  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:21.516152  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 05:29:21.516199  248270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.516255  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.521779  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:29:21.521829  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 05:29:23.716371  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.200087663s)
	I1210 05:29:23.716404  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 05:29:23.716440  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:23.716492  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:25.782777  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (2.066253017s)
	I1210 05:29:25.782824  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 05:29:25.782859  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:25.782943  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:27.253133  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.47015767s)
	I1210 05:29:27.253186  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 05:29:27.253222  248270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.253296  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.900792  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:29:27.900866  248270 cache_images.go:125] Successfully loaded all cached images
	I1210 05:29:27.900893  248270 cache_images.go:94] duration metric: took 13.457620664s to LoadCachedImages
	I1210 05:29:27.900927  248270 kubeadm.go:935] updating node { 192.168.50.227 8443 v1.34.3 crio true true} ...
	I1210 05:29:27.901107  248270 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-819501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
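The kubelet drop-in above is rendered from the profile's config values (container runtime, Kubernetes version, node name, node IP). A trimmed-down sketch of that kind of rendering with text/template; the template text and field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A stand-in for the kubelet unit rendered above, reduced to a few fields.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

type kubeletOpts struct {
	Runtime           string
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	opts := kubeletOpts{
		Runtime:           "crio",
		KubernetesVersion: "v1.34.3",
		NodeName:          "addons-819501",
		NodeIP:            "192.168.50.227",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}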
	I1210 05:29:27.901270  248270 ssh_runner.go:195] Run: crio config
	I1210 05:29:27.952088  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:27.952115  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:27.952136  248270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:29:27.952158  248270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.227 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-819501 NodeName:addons-819501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:29:27.952294  248270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-819501"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:29:27.952375  248270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.965806  248270 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 05:29:27.965903  248270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 05:29:27.978345  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 05:29:27.978458  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:27.978469  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 05:29:27.978552  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 05:29:27.998011  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 05:29:27.998043  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 05:29:27.998018  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 05:29:27.998067  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 05:29:27.998069  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 05:29:28.014708  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 05:29:28.014787  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 05:29:28.819394  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:29:28.832094  248270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 05:29:28.854035  248270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:29:28.875757  248270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1210 05:29:28.897490  248270 ssh_runner.go:195] Run: grep 192.168.50.227	control-plane.minikube.internal$ /etc/hosts
	I1210 05:29:28.902042  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:28.918543  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:29.065436  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:29.106997  248270 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501 for IP: 192.168.50.227
	I1210 05:29:29.107026  248270 certs.go:195] generating shared ca certs ...
	I1210 05:29:29.107047  248270 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.107244  248270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 05:29:29.260185  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt ...
	I1210 05:29:29.260226  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt: {Name:mk7e3ea493469b63ffe73a3fd5c0aebe67cc96c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260418  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key ...
	I1210 05:29:29.260430  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key: {Name:mk18d206c401766c525db7646d9b50127ae5a4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260509  248270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 05:29:29.303788  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt ...
	I1210 05:29:29.303818  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt: {Name:mk93a704a340d2989dfaa2c6ae18dd0ded5b740c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304005  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key ...
	I1210 05:29:29.304017  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key: {Name:mk52708f456900179c4e21317e6ee01f1f662a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304092  248270 certs.go:257] generating profile certs ...
	I1210 05:29:29.304158  248270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key
	I1210 05:29:29.304173  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt with IP's: []
	I1210 05:29:29.373028  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt ...
	I1210 05:29:29.373060  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: {Name:mkacd1d17bdb9699db5acb0deccedf4b963e9627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373247  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key ...
	I1210 05:29:29.373259  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key: {Name:mkc8bb793a8aba8601b09fb6b4c6b561546e1716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373344  248270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21
	I1210 05:29:29.373366  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.227]
	I1210 05:29:29.412783  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 ...
	I1210 05:29:29.412819  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21: {Name:mkbecca211b22f296a63bf12c0f8d6348e074d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413027  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 ...
	I1210 05:29:29.413042  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21: {Name:mkd21c7f030b36e5b0f136cec809fcc4792c4753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413124  248270 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt
	I1210 05:29:29.413195  248270 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key
	I1210 05:29:29.413246  248270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key
	I1210 05:29:29.413264  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt with IP's: []
	I1210 05:29:29.588512  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt ...
	I1210 05:29:29.588545  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt: {Name:mkbfd2473bf6ad2df18575d3c1713540ff713d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588726  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key ...
	I1210 05:29:29.588740  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key: {Name:mk5d55c2451593ca28ccc38ada487efa06a43ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588942  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:29:29.588986  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 05:29:29.589013  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:29:29.589037  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 05:29:29.589603  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:29:29.623229  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:29:29.655622  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:29:29.688080  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:29:29.719938  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:29:29.752646  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:29:29.787596  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:29:29.822559  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:29:29.861697  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:29:29.893727  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:29:29.916020  248270 ssh_runner.go:195] Run: openssl version
	I1210 05:29:29.923087  248270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.937161  248270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:29:29.950819  248270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956621  248270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956683  248270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.964340  248270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:29:29.977116  248270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:29:29.989798  248270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:29:29.994829  248270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:29:29.994916  248270 kubeadm.go:401] StartCluster: {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:29.995012  248270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:29:29.995077  248270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:29:30.033996  248270 cri.go:89] found id: ""
	I1210 05:29:30.034077  248270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:29:30.047749  248270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:29:30.061245  248270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:29:30.075038  248270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:29:30.075063  248270 kubeadm.go:158] found existing configuration files:
	
	I1210 05:29:30.075128  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:29:30.087377  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:29:30.087446  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:29:30.100015  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:29:30.112415  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:29:30.112501  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:29:30.125599  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.137858  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:29:30.137955  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.150895  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:29:30.162780  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:29:30.162853  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:29:30.175318  248270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 05:29:30.340910  248270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:29:42.405330  248270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 05:29:42.405415  248270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:29:42.405523  248270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:29:42.405657  248270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:29:42.405780  248270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:29:42.405921  248270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:29:42.407664  248270 out.go:252]   - Generating certificates and keys ...
	I1210 05:29:42.407781  248270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:29:42.407857  248270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:29:42.407979  248270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:29:42.408061  248270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:29:42.408157  248270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:29:42.408230  248270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:29:42.408313  248270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:29:42.408455  248270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408531  248270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:29:42.408656  248270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408732  248270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:29:42.408789  248270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:29:42.408829  248270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:29:42.408896  248270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:29:42.408941  248270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:29:42.409022  248270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:29:42.409106  248270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:29:42.409190  248270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:29:42.409293  248270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:29:42.409407  248270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:29:42.409507  248270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:29:42.411069  248270 out.go:252]   - Booting up control plane ...
	I1210 05:29:42.411187  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:29:42.411282  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:29:42.411391  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:29:42.411501  248270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:29:42.411592  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:29:42.411684  248270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:29:42.411786  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:29:42.411843  248270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:29:42.412015  248270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:29:42.412159  248270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:29:42.412277  248270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203089s
	I1210 05:29:42.412416  248270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:29:42.412493  248270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.227:8443/livez
	I1210 05:29:42.412581  248270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:29:42.412660  248270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:29:42.412738  248270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.103693537s
	I1210 05:29:42.412795  248270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.793585918s
	I1210 05:29:42.412851  248270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001380192s
	I1210 05:29:42.412969  248270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:29:42.413101  248270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:29:42.413187  248270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:29:42.413353  248270 kubeadm.go:319] [mark-control-plane] Marking the node addons-819501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:29:42.413409  248270 kubeadm.go:319] [bootstrap-token] Using token: ifaxfb.g6s3du0ko87s83xe
	I1210 05:29:42.415656  248270 out.go:252]   - Configuring RBAC rules ...
	I1210 05:29:42.415753  248270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:29:42.415838  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:29:42.415978  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:29:42.416146  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:29:42.416292  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:29:42.416410  248270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:29:42.416562  248270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:29:42.416613  248270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:29:42.416653  248270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:29:42.416659  248270 kubeadm.go:319] 
	I1210 05:29:42.416721  248270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:29:42.416730  248270 kubeadm.go:319] 
	I1210 05:29:42.416794  248270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:29:42.416799  248270 kubeadm.go:319] 
	I1210 05:29:42.416820  248270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:29:42.416871  248270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:29:42.416929  248270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:29:42.416935  248270 kubeadm.go:319] 
	I1210 05:29:42.416983  248270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:29:42.416988  248270 kubeadm.go:319] 
	I1210 05:29:42.417031  248270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:29:42.417039  248270 kubeadm.go:319] 
	I1210 05:29:42.417087  248270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:29:42.417155  248270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:29:42.417214  248270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:29:42.417227  248270 kubeadm.go:319] 
	I1210 05:29:42.417300  248270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:29:42.417370  248270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:29:42.417376  248270 kubeadm.go:319] 
	I1210 05:29:42.417457  248270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417548  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 \
	I1210 05:29:42.417568  248270 kubeadm.go:319] 	--control-plane 
	I1210 05:29:42.417576  248270 kubeadm.go:319] 
	I1210 05:29:42.417649  248270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:29:42.417655  248270 kubeadm.go:319] 
	I1210 05:29:42.417725  248270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417846  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 
	I1210 05:29:42.417859  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:42.417870  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:42.419498  248270 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 05:29:42.420865  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 05:29:42.435368  248270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 05:29:42.463419  248270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:29:42.463507  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:42.463555  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-819501 minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-819501 minikube.k8s.io/primary=true
	I1210 05:29:42.517462  248270 ops.go:34] apiserver oom_adj: -16
	I1210 05:29:42.645586  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.145896  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.646263  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.146506  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.646447  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.146404  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.646503  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.146345  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.645679  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.765786  248270 kubeadm.go:1114] duration metric: took 4.302351478s to wait for elevateKubeSystemPrivileges
	I1210 05:29:46.765837  248270 kubeadm.go:403] duration metric: took 16.770933871s to StartCluster
	I1210 05:29:46.765872  248270 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766077  248270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:29:46.766575  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766803  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:29:46.766812  248270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:46.766895  248270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:29:46.767036  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.767055  248270 addons.go:70] Setting yakd=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting cloud-spanner=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting default-storageclass=true in profile "addons-819501"
	I1210 05:29:46.767094  248270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-819501"
	I1210 05:29:46.767101  248270 addons.go:239] Setting addon cloud-spanner=true in "addons-819501"
	I1210 05:29:46.767102  248270 addons.go:239] Setting addon yakd=true in "addons-819501"
	I1210 05:29:46.767110  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-819501"
	I1210 05:29:46.767110  248270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767136  248270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-819501"
	I1210 05:29:46.767140  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:29:46.767153  248270 addons.go:70] Setting gcp-auth=true in profile "addons-819501"
	I1210 05:29:46.767165  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767173  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767110  248270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767198  248270 addons.go:70] Setting inspektor-gadget=true in profile "addons-819501"
	I1210 05:29:46.767209  248270 addons.go:239] Setting addon inspektor-gadget=true in "addons-819501"
	I1210 05:29:46.767208  248270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-819501"
	I1210 05:29:46.767236  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767251  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768008  248270 addons.go:70] Setting metrics-server=true in profile "addons-819501"
	I1210 05:29:46.768032  248270 addons.go:239] Setting addon metrics-server=true in "addons-819501"
	I1210 05:29:46.768064  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767182  248270 addons.go:70] Setting ingress=true in profile "addons-819501"
	I1210 05:29:46.768104  248270 addons.go:239] Setting addon ingress=true in "addons-819501"
	I1210 05:29:46.768148  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767190  248270 addons.go:70] Setting ingress-dns=true in profile "addons-819501"
	I1210 05:29:46.768193  248270 addons.go:239] Setting addon ingress-dns=true in "addons-819501"
	I1210 05:29:46.768232  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768313  248270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-819501"
	I1210 05:29:46.768289  248270 addons.go:70] Setting storage-provisioner=true in profile "addons-819501"
	I1210 05:29:46.768342  248270 addons.go:70] Setting volcano=true in profile "addons-819501"
	I1210 05:29:46.768349  248270 addons.go:239] Setting addon storage-provisioner=true in "addons-819501"
	I1210 05:29:46.768354  248270 addons.go:239] Setting addon volcano=true in "addons-819501"
	I1210 05:29:46.768375  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768380  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767174  248270 mustload.go:66] Loading cluster: addons-819501
	I1210 05:29:46.768847  248270 addons.go:70] Setting registry=true in profile "addons-819501"
	I1210 05:29:46.768871  248270 addons.go:239] Setting addon registry=true in "addons-819501"
	I1210 05:29:46.768917  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769020  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.769085  248270 addons.go:70] Setting registry-creds=true in profile "addons-819501"
	I1210 05:29:46.769091  248270 out.go:179] * Verifying Kubernetes components...
	I1210 05:29:46.769102  248270 addons.go:239] Setting addon registry-creds=true in "addons-819501"
	I1210 05:29:46.769133  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769444  248270 addons.go:70] Setting volumesnapshots=true in profile "addons-819501"
	I1210 05:29:46.769469  248270 addons.go:239] Setting addon volumesnapshots=true in "addons-819501"
	I1210 05:29:46.769500  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768333  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-819501"
	I1210 05:29:46.771211  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:46.775133  248270 addons.go:239] Setting addon default-storageclass=true in "addons-819501"
	I1210 05:29:46.775185  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.775483  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:29:46.775493  248270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:29:46.775702  248270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:29:46.775491  248270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:29:46.776909  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:29:46.776932  248270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:29:46.776946  248270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1210 05:29:46.777456  248270 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:29:46.777012  248270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:46.777568  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:29:46.777745  248270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:29:46.777753  248270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:29:46.777795  248270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:46.777807  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:29:46.778720  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:29:46.778753  248270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:46.779205  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:29:46.778868  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.779621  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.779626  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:29:46.779644  248270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:29:46.779699  248270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:46.779717  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:29:46.779802  248270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:29:46.779812  248270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:46.781021  248270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-819501"
	I1210 05:29:46.781066  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.781599  248270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:46.781619  248270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:29:46.781927  248270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:29:46.781979  248270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:29:46.782811  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:29:46.782848  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:46.783287  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:29:46.782853  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:46.783372  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:29:46.783764  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:29:46.783783  248270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:46.784144  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:29:46.784493  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:29:46.784507  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:29:46.784990  248270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:29:46.785462  248270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:29:46.787159  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.787165  248270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:29:46.787232  248270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:29:46.787240  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:29:46.787412  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:29:46.787542  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787581  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787826  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.788403  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:46.788672  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:29:46.789422  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789691  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789726  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789870  248270 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:29:46.789904  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789986  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.790377  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790436  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790604  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790708  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.790929  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:29:46.791336  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791379  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791545  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.791579  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.791918  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:46.791944  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:29:46.792221  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792333  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792335  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.792373  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792580  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792613  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.793171  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793390  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793737  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:29:46.793814  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.793846  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.794407  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.794632  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795534  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795729  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.795767  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795997  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796033  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796253  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796350  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796260  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:29:46.796383  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796916  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796960  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796989  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797284  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.797288  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.797322  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797369  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797592  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798121  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798164  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798338  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798405  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798934  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798992  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:29:46.799112  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.799331  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.799362  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.799584  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.800274  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:29:46.800293  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:29:46.802692  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803162  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.803185  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803366  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	W1210 05:29:47.001763  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.001799  248270 retry.go:31] will retry after 305.783852ms: ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	W1210 05:29:47.014988  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.015023  248270 retry.go:31] will retry after 221.795568ms: ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
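Editor's note: the two sshutil warnings above are each followed by retry.go scheduling another attempt after a few hundred milliseconds. Below is a minimal Go sketch of that retry-with-backoff pattern; the helper name, the fixed attempt count, and the doubling policy are assumptions for illustration, not minikube's actual retry.go logic.

    package main

    import (
        "fmt"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
    // sleeping a growing interval between failures (as the "will retry after"
    // log lines above suggest).
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off further on each failure
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(3, 200*time.Millisecond, func() error {
            // stand-in for the SSH dial that was reset by the guest
            return fmt.Errorf("ssh: handshake failed: connection reset by peer")
        })
    }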
	I1210 05:29:47.174748  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:47.174750  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 05:29:47.407045  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:47.432282  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:29:47.432309  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:29:47.482855  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:47.501562  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:47.503279  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:29:47.503299  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:29:47.509563  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:47.515555  248270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:29:47.515582  248270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:29:47.525606  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:47.562132  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:47.566586  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:47.617239  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:29:47.617273  248270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:29:47.645948  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:29:47.645980  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:29:47.758438  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:47.910234  248270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:47.910257  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:29:47.920337  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:29:47.920367  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:29:48.015067  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:48.027823  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:29:48.027852  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:29:48.067181  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:29:48.067220  248270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:29:48.292618  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:29:48.292654  248270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:29:48.352705  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:48.609223  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:29:48.609250  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:29:48.685554  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:48.755069  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:29:48.755098  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:29:48.842106  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:29:48.842167  248270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:29:48.875769  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:48.875798  248270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:29:49.413506  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:49.413534  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:29:49.466897  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:29:49.466930  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:29:49.622705  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:29:49.622739  248270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:29:49.796096  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:50.219307  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:29:50.219336  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:29:50.219351  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:50.321293  248270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:50.321319  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:29:50.537459  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:29:50.537499  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:29:50.716098  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:51.041250  248270 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.866447078s)
	I1210 05:29:51.041309  248270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.866465642s)
	I1210 05:29:51.041340  248270 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1210 05:29:51.042044  248270 node_ready.go:35] waiting up to 6m0s for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049135  248270 node_ready.go:49] node "addons-819501" is "Ready"
	I1210 05:29:51.049170  248270 node_ready.go:38] duration metric: took 7.101622ms for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049187  248270 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:29:51.049251  248270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:29:51.068361  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:29:51.068386  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:29:51.554068  248270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-819501" context rescaled to 1 replicas
	I1210 05:29:51.613448  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:29:51.613477  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:29:52.107019  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:29:52.107058  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:29:52.549779  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:52.549811  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:29:53.265677  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:54.221671  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:29:54.225457  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.225987  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:54.226021  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.226211  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.099859  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:29:55.502306  248270 addons.go:239] Setting addon gcp-auth=true in "addons-819501"
	I1210 05:29:55.502381  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:55.504619  248270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:29:55.507749  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508396  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:55.508441  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508753  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.727192  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.320101368s)
	I1210 05:29:55.727260  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.244355162s)
	I1210 05:29:55.727286  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.217695624s)
	I1210 05:29:55.727349  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.225763647s)
	I1210 05:29:55.727464  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.201812774s)
	I1210 05:29:55.727514  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.165347037s)
	I1210 05:29:55.727599  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.160988535s)
	I1210 05:29:55.727655  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.969192004s)
	W1210 05:29:55.895228  248270 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1210 05:29:58.186837  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.834088583s)
	I1210 05:29:58.186914  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.501321346s)
	I1210 05:29:58.186956  248270 addons.go:495] Verifying addon registry=true in "addons-819501"
	I1210 05:29:58.187022  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.39087132s)
	I1210 05:29:58.187082  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.96769279s)
	I1210 05:29:58.187047  248270 addons.go:495] Verifying addon metrics-server=true in "addons-819501"
	I1210 05:29:58.187133  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.172020307s)
	I1210 05:29:58.187195  248270 addons.go:495] Verifying addon ingress=true in "addons-819501"
	I1210 05:29:58.188701  248270 out.go:179] * Verifying registry addon...
	I1210 05:29:58.188716  248270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-819501 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:29:58.189735  248270 out.go:179] * Verifying ingress addon...
	I1210 05:29:58.191374  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:29:58.192560  248270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:29:58.348103  248270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:58.348137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.367929  248270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:29:58.367966  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
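Editor's note: the kapi.go lines above poll the cluster for pods matching a label selector ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx") and report the current phase until it leaves Pending. A hedged client-go sketch of that kind of wait loop is below; the selector strings mirror the log, but the helper itself is illustrative and is not minikube's kapi code.

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning lists pods matching selector in ns at each interval
    // until every matching pod reports Running or the timeout elapses.
    func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return nil
            }
            fmt.Printf("waiting for pod %q, %d/%d running\n", selector, running, len(pods.Items))
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
    }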
	I1210 05:29:58.542091  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.825930838s)
	I1210 05:29:58.542168  248270 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.492891801s)
	W1210 05:29:58.542182  248270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:58.542204  248270 api_server.go:72] duration metric: took 11.775367493s to wait for apiserver process to appear ...
	I1210 05:29:58.542216  248270 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:29:58.542242  248270 api_server.go:253] Checking apiserver healthz at https://192.168.50.227:8443/healthz ...
	I1210 05:29:58.542243  248270 retry.go:31] will retry after 174.698732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
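Editor's note: the stderr above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch that creates the snapshot.storage.k8s.io CRDs, so the first apply fails with "no matches for kind" and the tool retries (the 05:29:58.717796 line below re-applies with --force). A hedged sketch of one way to avoid the race, applying the CRDs first, waiting for them to become Established, and only then applying the objects that use them, follows; the file paths come from the log, but the two-phase split and the helper are assumptions, not minikube's addon code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run shells out to kubectl with the given arguments, streaming output.
    func run(args ...string) error {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Phase 1: the CRDs themselves.
        crds := []string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
        }
        for _, f := range crds {
            if err := run("apply", "-f", f); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
        // Wait until the API server can serve the new kind before using it.
        if err := run("wait", "--for=condition=Established",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Phase 2: objects that depend on the CRDs, e.g. the VolumeSnapshotClass.
        if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }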
	I1210 05:29:58.565722  248270 api_server.go:279] https://192.168.50.227:8443/healthz returned 200:
	ok
	I1210 05:29:58.585143  248270 api_server.go:141] control plane version: v1.34.3
	I1210 05:29:58.585187  248270 api_server.go:131] duration metric: took 42.962592ms to wait for apiserver health ...
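Editor's note: the api_server lines above poll https://192.168.50.227:8443/healthz until the apiserver answers 200 "ok". A minimal Go sketch of such a probe is below; the insecure TLS setting and the hard-coded endpoint are assumptions to keep the example self-contained for a throwaway test cluster, not how minikube itself authenticates to the apiserver.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is signed by the cluster CA; verification is
            // skipped here only to keep the sketch dependency-free.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.227:8443/healthz" // endpoint taken from the log above
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }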
	I1210 05:29:58.585201  248270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:29:58.669908  248270 system_pods.go:59] 16 kube-system pods found
	I1210 05:29:58.669957  248270 system_pods.go:61] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.669977  248270 system_pods.go:61] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669984  248270 system_pods.go:61] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669992  248270 system_pods.go:61] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.670000  248270 system_pods.go:61] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.670006  248270 system_pods.go:61] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.670017  248270 system_pods.go:61] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.670027  248270 system_pods.go:61] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.670033  248270 system_pods.go:61] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.670041  248270 system_pods.go:61] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.670051  248270 system_pods.go:61] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.670060  248270 system_pods.go:61] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.670076  248270 system_pods.go:61] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.670084  248270 system_pods.go:61] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.670094  248270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.670101  248270 system_pods.go:61] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.670110  248270 system_pods.go:74] duration metric: took 84.901558ms to wait for pod list to return data ...
	I1210 05:29:58.670120  248270 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:29:58.717796  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:58.755265  248270 default_sa.go:45] found service account: "default"
	I1210 05:29:58.755306  248270 default_sa.go:55] duration metric: took 85.176789ms for default service account to be created ...
	I1210 05:29:58.755322  248270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:29:58.837383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.837387  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.837931  248270 system_pods.go:86] 17 kube-system pods found
	I1210 05:29:58.837967  248270 system_pods.go:89] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.837983  248270 system_pods.go:89] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838000  248270 system_pods.go:89] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838007  248270 system_pods.go:89] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.838018  248270 system_pods.go:89] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.838025  248270 system_pods.go:89] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.838036  248270 system_pods.go:89] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.838043  248270 system_pods.go:89] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.838049  248270 system_pods.go:89] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.838060  248270 system_pods.go:89] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.838075  248270 system_pods.go:89] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.838101  248270 system_pods.go:89] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.838115  248270 system_pods.go:89] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.838123  248270 system_pods.go:89] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.838130  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.838137  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xhmx9" [f17bb4a5-df22-4e1c-a6dd-a37a43712cbb] Pending
	I1210 05:29:58.838143  248270 system_pods.go:89] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.838154  248270 system_pods.go:126] duration metric: took 82.823961ms to wait for k8s-apps to be running ...
	I1210 05:29:58.838177  248270 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:29:58.838240  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:59.216996  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.217048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.760212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.799267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.028379  248270 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.5237158s)
	I1210 05:30:00.030605  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:30:00.031844  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.766107934s)
	I1210 05:30:00.031919  248270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:30:00.033501  248270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:30:00.033501  248270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:30:00.035389  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:30:00.035424  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:30:00.036495  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:30:00.092418  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:30:00.092524  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:30:00.099191  248270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:30:00.099218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.154497  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.154523  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:30:00.218466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.218476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.239458  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.551381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.700051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.700489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.046588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.060958  248270 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.222683048s)
	I1210 05:30:01.060998  248270 system_svc.go:56] duration metric: took 2.222816606s WaitForService to wait for kubelet
	I1210 05:30:01.061010  248270 kubeadm.go:587] duration metric: took 14.294174339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:30:01.061035  248270 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:30:01.060959  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.343106707s)
	I1210 05:30:01.067487  248270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 05:30:01.067520  248270 node_conditions.go:123] node cpu capacity is 2
	I1210 05:30:01.067536  248270 node_conditions.go:105] duration metric: took 6.493768ms to run NodePressure ...
	I1210 05:30:01.067549  248270 start.go:242] waiting for startup goroutines ...
	I1210 05:30:01.200588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.203833  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.575049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.783678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.820709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.901619  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.662111316s)
	I1210 05:30:01.903054  248270 addons.go:495] Verifying addon gcp-auth=true in "addons-819501"
	I1210 05:30:01.905447  248270 out.go:179] * Verifying gcp-auth addon...
	I1210 05:30:01.908030  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:30:01.971590  248270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:30:01.971620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.099231  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.211381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.211475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.423901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.544501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.700413  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.702741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.917043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.043724  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.195997  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.200750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.422696  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.542204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.699053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.702004  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.913811  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.042408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.197038  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.197194  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.414256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.544503  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.696289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.699139  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.912192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.043926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.197317  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.198154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.413841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.542630  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.836234  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.837463  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.912581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.041785  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.197071  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.198021  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.412100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.541405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.696562  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.697300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.912034  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.040758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.195563  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.196426  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.414799  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.541759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.699852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.700171  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.913279  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.199694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.200267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.413210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.541549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.694572  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.700820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.913565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.043130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.199805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.200384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.413468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.738431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.743709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.744006  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.913178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.045294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.201112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.201536  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.412804  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.543961  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.700658  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.702942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.913710  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.042129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.198908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.204061  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.412990  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.540719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.701614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.702763  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.914546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.042555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.197852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.198653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.417360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.542814  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.697802  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.699425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.913723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.040006  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.195864  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.199933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.418096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.543369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.699360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.699489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.912674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.040435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.197368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.198434  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.413640  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.540394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.696663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.699389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.915541  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.429953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.433247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.435508  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.435521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.540749  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.699596  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.700467  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.916459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.041580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.195018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.197575  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.412219  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.546078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.718549  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.718656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.912761  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.049720  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.199564  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.199795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.416037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.544532  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.699384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.700731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.945756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.041320  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.200647  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.200899  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.413830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.546581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.697003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.702274  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.912631  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.043237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.197045  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.412567  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.540612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.695730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.698016  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.913692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.045701  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.197249  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.197641  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.413847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.542656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.698818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.704499  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.918024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.042453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.197201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.203709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.415612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.547180  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.697089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.698183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.913362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.048347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.198596  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:22.414126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.541638  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.695242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.698366  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.021289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.042294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.198290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.199072  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.414047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.554644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.701970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.705112  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.913583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.061821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.199309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.203209  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.418670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.541305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.696346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.700118  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.915306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.053528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.196902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.197440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.413854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.542391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.701623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.704483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.911945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.046126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.212011  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.214603  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.412412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.543055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.696033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.699135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.911772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.040667  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.195375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.197664  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.412483  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.541678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.700619  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.701393  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.912791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.040728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.198661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.201110  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.416573  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.904196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.904398  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.904575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.913235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.045841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.200545  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.203445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.411788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.542883  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.699400  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.701573  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.913680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.040168  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.197412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.202099  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.413962  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.542075  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.697823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.699758  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.933743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.041743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.200789  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.202078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.411894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.543354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.697635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.915844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.042712  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.195680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.196329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.413623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.541402  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.697475  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.700593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.914887  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.043306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.197100  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.199780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.412378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.542047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.696129  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.696893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.912809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.040753  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.195477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.196560  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.417130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.543287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.702104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.702245  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.912016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.040624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.196109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.196529  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.411973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.540615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.697044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.697719  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.913698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.040429  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.195759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.196067  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.413418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.541807  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.698339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.698627  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.912470  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.042035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.195635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.195732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.412529  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.541462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.696355  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.696358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.911987  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.195127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.197401  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.412456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.541008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.696652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.696855  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.912382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.040953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.195628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.197046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.411281  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.541262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.696395  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.696643  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.912042  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.195226  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.196941  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.413324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.540517  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.695280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.697540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.196610  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.197062  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.411344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.541121  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.697442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.697602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.912587  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.040852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.195549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.196962  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.412892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.540835  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.695893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.697068  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.912138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.041313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.197278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.197476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.412390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.541382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.695572  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.696270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.911719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.040923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.199247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.200272  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.412362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.543558  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.695243  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.696776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.912579  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.040311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.196524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.196806  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.412967  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.541112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.695648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.698135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.911238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.040981  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.197106  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.413618  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.541179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.700444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.700579  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.912054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.040909  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.197636  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.540223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.697846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.698230  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.912349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.042055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.198187  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.411783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.542256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.695546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.911939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.041385  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.196163  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.196353  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.412001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.541756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.695783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.696993  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.911727  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.040528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.196370  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.196506  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.412758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.542086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.697092  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.698043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.912949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.041707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.195044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.196530  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.412015  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.540676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.695253  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.697115  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.911838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.040571  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.197767  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.541296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.695780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.698078  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.911868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.040966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.196250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.198224  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.412952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.541033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.695318  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.697975  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.918128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.041725  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.196856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.196973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.412680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.541389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.696707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.697417  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.911941  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.041035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.197936  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.416128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.540974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.696532  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.696605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.912032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.041144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.195844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.197459  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.412239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.543267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.696985  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.697065  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.911959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.041069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.196450  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.197271  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.411484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.543169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.698956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.700761  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.912297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.041754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.198093  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.412335  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.547820  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.697087  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.697216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.041334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.195790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.197287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.412314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.541645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.695025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.696945  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.913032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.042206  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.195905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.196849  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.413416  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.541940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.695570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.697495  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.912083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.040980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.197600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.197750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.413084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.541003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.701147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.701307  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.912239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.041838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.197451  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.199381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.566113  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.697162  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.697404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.912554  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.041300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.197265  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.197744  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.412283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.541313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.697345  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.697362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.912456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.041433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.196053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.196386  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.411480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.541396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.697012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.697122  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.912021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.196425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.197055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.414392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.543906  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.697548  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.699128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.914449  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.059122  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.199035  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.199060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.411555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.548145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.706728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.710127  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.913080  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.045418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.197313  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.198838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.412442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.550194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.698588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.700102  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.912898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.041523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.200048  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.201764  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.416732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.541668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.696805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.699404  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.913919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.044035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.202083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.203196  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.424278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.543568  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.694390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.696701  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.928227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.047635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.202711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:10.203091  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.412177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.547099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.701159  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.701427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.010654  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.047478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.197452  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.197505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.412499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.542966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.699944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.704028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.913615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.040550  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.202167  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.206422  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.414111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.541586  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.701242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.701392  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.911980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.041255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.196819  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:13.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.415365  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.543891  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.698298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.698483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.024018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.044419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.200005  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.200056  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.414780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.545383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.698157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.698240  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.912312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.047766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.200507  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.201630  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.414359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.542858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.697064  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.914308  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.042374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.203003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.205670  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.415017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.547467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.698087  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.704457  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.912729  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.042682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.197758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.199865  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.415143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.543861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.697284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.697374  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.912774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.051952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.198347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.198464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.413101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.544220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.697480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.698347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.912174  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.041195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.195597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.197754  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.411930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.543107  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.911557  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.040046  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.195570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.197384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.411741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.540714  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.696789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.696930  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.912723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.040277  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.196028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.196867  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.412845  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.540858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.695001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.697856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.913055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.041789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.195447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.197793  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.412097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.541100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.695437  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.697741  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.041760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.195078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.196924  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.411992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.540721  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.696368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.697952  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.924611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.042614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.195700  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.197113  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.542539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.696569  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.699116  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.912589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.040194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.197311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.197820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.412984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.541361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.696760  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.699224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.911979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.041540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.195316  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.196448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.412932  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.540888  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.695457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.912839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.040692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.198285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.198389  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.413216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.541054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.695524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.697682  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.912984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.041063  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.196678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.197402  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.412505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.540110  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.695717  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.697736  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.912387  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.041413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.194635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.196805  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.412044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.542788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.695337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.697270  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.911726  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.041798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.195904  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.197264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.411616  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.540508  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.697339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.697782  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.913547  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.040452  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.195922  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.196535  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.411982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.540478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.695278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.698540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.912494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.043856  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.197609  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.197819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.412431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.541752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.696539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.697403  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.912910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.041048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.197917  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.412695  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.540734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.695267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.697595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.912951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.040593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.194646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.197266  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.411742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.540945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.698161  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.698313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.912160  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.042016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.195246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.197425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.412035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.541521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.694583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.697617  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.911895  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.040992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.196447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.197672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.412372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.541192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.697869  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.699654  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.912908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.040956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.196801  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.196942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.411935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.541058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.694918  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.696472  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.912836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.040660  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.194944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.197448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.411791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.541124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.697144  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.697809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.912697  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.040461  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.194656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.196407  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.411925  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.541913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.695201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.696659  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.912467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.040722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.195735  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.196428  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.412510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.540388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.695082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.041847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.197091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.197567  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.412898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.540868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.697534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.697902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.912633  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.040266  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.196150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.198086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.411854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.541196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.697014  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.698518  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.912523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.040145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.195475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.197044  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.412242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.540853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.695064  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.912652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.040413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.195534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.196588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.412338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.541409  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.696199  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.696325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.912351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.041899  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.195069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.411817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.540975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.696734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.696956  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.912435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.041440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.194926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.197727  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.412648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.540969  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.695484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.699909  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.914009  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.043433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.197574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.197992  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.412334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.541535  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.694711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.695578  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.912973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.040846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.195216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.196738  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.412375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.542493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.696138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.696493  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.912933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.041324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.196820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.412830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.542225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.696086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.696301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.911815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.040698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.196781  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.412677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.541012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.695171  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.696197  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.912665  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.040391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.196254  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.196414  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.411787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.546678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.697612  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.697836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.912801  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.041759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.197930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.413220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.541923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.695369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.696907  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.913889  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.040661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.196476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.196983  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.411076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.541834  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.696458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.696646  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.912125  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.041959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.195464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.412425  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.541916  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.698408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.699009  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.912043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.041485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.196212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.196795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.412287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.541805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.695021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.697644  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.912403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.041280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.196215  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.196845  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.540086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.695689  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.698723  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.912185  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.041278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.195718  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.196357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.415350  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.541275  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.695983  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.912626  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.041539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.195051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.196590  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.411806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.696600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.697564  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.912135  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.041032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.196602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.196745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.412270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.542109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.696566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.697555  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.912666  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.040543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.196971  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.411627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.541861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.695413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.913487  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.042233  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.195394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.197595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.412418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.541658  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.697383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.698671  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.914029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.042979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.198534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.198746  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.413488  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.540091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.699039  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.699243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.913260  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.042351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.196797  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.412055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.540813  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.695810  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.696278  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.912607  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.040336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.195923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.197906  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.412339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.697319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.697522  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.911871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.042054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.196538  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.197169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.412220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.541589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.695349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.697593  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.911341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.041432  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.196668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.196868  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.412383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.696264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.696298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.912896  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.041463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.196315  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.196358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.411993  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.540392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.696388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.697104  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.915258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.041599  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.196372  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.197301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.413332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.541386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.700566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.700862  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.912947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.042060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.197176  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.411919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.695999  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.697329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.042718  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.196344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.196356  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.411381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.541528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.696498  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.698344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.912694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.041130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.195351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.197663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.412589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.540341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.697469  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.699618  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.912519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.041597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.195947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.197653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.412598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.540709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.696715  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.928026  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.045349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.197477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.197509  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.412404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.542237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.695946  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.696615  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.911988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.040943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.196098  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.197927  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.413238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.544438  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.696894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.697344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.912008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.040574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.196776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.197552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.411737  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.540831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.698462  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.913052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.041218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.195703  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.198051  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.411991  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.541641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.695803  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.041120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.197033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.197238  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.541354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.695478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.696944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.913147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.042436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.196145  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.196396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.411839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.540354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.696245  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.697787  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.912398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.196069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.199304  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.412956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.544144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.699711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.702208  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.911864  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.043905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.198903  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.199000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.422787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.544150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.700200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.704668  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.917309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.046645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.195931  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.196243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.413120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.546436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.700169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.700244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.914444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.047410  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.200096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.203625  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.417682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.546754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.698047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.703046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.914138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.045377  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.200843  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.201169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.413890  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.543372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.696626  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.697747  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.916852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.043284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.196918  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.412927  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.541136  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.699213  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.701183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.914129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.043092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.321913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.323020  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.413045  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.542232  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.698054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.699672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.914677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.045854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.197868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.197922  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.411543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.550068  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.695076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.699427  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.912378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.042894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.197017  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.199935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.417341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.541748  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.695988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.698216  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.911305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.043682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.203306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.203790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:28.413791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.548187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.705853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.706201  248270 kapi.go:107] duration metric: took 2m30.513642085s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:32:28.911698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.040817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.197936  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.421552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.541741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.696634  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.912225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.041603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:30.195204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.411259  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.541923  248270 kapi.go:107] duration metric: took 2m30.505431248s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:32:30.700458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.916295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.198295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.413624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.701677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.934977  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.199414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.418730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.695325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.913157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.197510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.413102  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.696635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.912976  248270 kapi.go:107] duration metric: took 2m32.004940635s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:32:33.914947  248270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-819501 cluster.
	I1210 05:32:33.916673  248270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:32:33.918159  248270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
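	(Editor's note: the gcp-auth messages above say a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a rough illustration only, and not part of this test run's output, the Go fragment below builds such a pod spec with the upstream k8s.io/api types; the "true" value and the pod/container names are illustrative assumptions, since the log names only the label key.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod definition carrying the gcp-auth-skip-secret label
		// mentioned in the minikube gcp-auth addon message; the value "true"
		// is an assumption, the log only specifies the key.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		// Print the labels to show what the admission webhook would see.
		fmt.Printf("%s/%s labels: %v\n", pod.Namespace, pod.Name, pod.Labels)
	}

	(Requires the k8s.io/api and k8s.io/apimachinery modules in go.mod; applying the same label in a plain pod manifest has the equivalent effect.)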
	I1210 05:32:34.195642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:34.696627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.196024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.695782  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.696145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.195456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.695683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.240338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.696247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.196496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.695741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.196422  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.196332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.196570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.695829  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.195357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.195089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.697101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.195298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.696306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.697545  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.697106  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.196501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.696838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.196330  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.196426  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.695540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.196892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.696131  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.195514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.695825  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.195964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.696041  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.195223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.695628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.196841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.695740  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.194920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.696783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.195842  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.696514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.196291  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.696032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.195757  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.695913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.196132  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.696525  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.195519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.696676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.196468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.697155  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.196040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.695496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.197092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.696051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.194871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.696031  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.196751  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.697187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.195603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.696317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.196085  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.696248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.196156  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.695296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.196584  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.700018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.195142  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.697083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.195224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.695306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.696058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.195574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.696336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.195623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.698100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.197329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.695675  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.196683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.697097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.195492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.695645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.695427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.196795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.697360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.196642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.696362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.195333  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.695816  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.694821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.195992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.695334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.196484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.697312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.196789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.695382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.197511  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.696029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.195379  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.696478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.196742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.696617  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.196691  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.696332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.197105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.695261  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.696412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.196853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.695141  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.196017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.694870  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.196760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.197084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.694798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.195481  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.696818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.195103  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.695287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.196285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.695441  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.695151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.196592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.695791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.195539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.697526  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.195494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.697258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.195374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.696246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.195360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.696595  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.195386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.696495  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.195396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.696399  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.195926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.695752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.696819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.196463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.695510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.196329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.695117  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.195544  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.696499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.196995  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.695938  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.196716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.696111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.195836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.697347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.196443  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.696024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.196687  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.696411  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.195151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.696081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.196766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.695605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.196288  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.696235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.195823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.695868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.196217  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.695561  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.195933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.696154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.195390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.696250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.196127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.695227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.200317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.197235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.696004  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.196361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.695240  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.196104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.695419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.196319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.694964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.196974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.696337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.195978  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.195269  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.695674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.197294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.695137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.196248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.694996  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.196381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.695404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.695758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.196179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.695177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.696242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.196197  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.695191  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.694979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.194913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.697081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.196158  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.694728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.196611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.697492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.196964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.696515  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.195840  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.695719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.196490  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.696390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.195290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.695663  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.201407  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.695974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.197190  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.695471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.196744  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.696589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.195808  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.696661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.196415  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.695620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.195699  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.696759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.196702  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.695614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.194964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.196210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.196613  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.696165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.198418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.695943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.197398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.695772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.195040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.696314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.195340  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.696359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.196391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.696030  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.195397  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.195580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.696472  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.196255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.195578  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.696577  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.197485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.196297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.196752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.697037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.196530  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.696049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.196414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.196246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.696013  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.197119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.695940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.195722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.695900  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.195405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.694853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.195846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.696065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.196493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.695901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.196143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.695285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.197326  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.694920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.194762  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.696331  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.195592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.696344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.195284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.695602  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.195944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.695625  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.195741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.697910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.196058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.195849  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.697178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.696494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.196796  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.696794  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.196939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.695777  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.196354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.697453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.195090  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.695593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.195988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.699457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.196105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.695271  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.195646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.696815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.195859  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.195152  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.696022  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.196238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.696329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.195730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.696445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.196514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.695588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.197806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.697353  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.196082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.696133  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.196367  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.696099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.695686  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.195500  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.697384  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.196018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.694823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.196076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.695462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.195471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.696297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.196124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.696556  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.196283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.695815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.196641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.697074  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.195733  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.697646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.195935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.696025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.195838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.695200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.196244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.696951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.695262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.196037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.695111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.195459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.696434  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.198620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.697017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.195949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.695731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.195894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.695975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.194970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.695677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.197208  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.695370  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.195901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.196346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.695774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.194982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.697065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.195134  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.694745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.195670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.695951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.196052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.696732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.197195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.696093  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.195202  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.695405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.196119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.696476  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.196403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.695788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.195036  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.696466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.198284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.695289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:58.191968  248270 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1210 05:35:58.192004  248270 kapi.go:107] duration metric: took 6m0.000635436s to wait for kubernetes.io/minikube-addons=registry ...
	W1210 05:35:58.192136  248270 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1210 05:35:58.194007  248270 out.go:179] * Enabled addons: inspektor-gadget, ingress-dns, storage-provisioner, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, yakd, volumesnapshots, ingress, csi-hostpath-driver, gcp-auth
	I1210 05:35:58.195457  248270 addons.go:530] duration metric: took 6m11.428581243s for enable addons: enabled=[inspektor-gadget ingress-dns storage-provisioner amd-gpu-device-plugin cloud-spanner nvidia-device-plugin default-storageclass registry-creds metrics-server yakd volumesnapshots ingress csi-hostpath-driver gcp-auth]
	I1210 05:35:58.195518  248270 start.go:247] waiting for cluster config update ...
	I1210 05:35:58.195551  248270 start.go:256] writing updated cluster config ...
	I1210 05:35:58.195954  248270 ssh_runner.go:195] Run: rm -f paused
	I1210 05:35:58.205700  248270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:58.211367  248270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.216998  248270 pod_ready.go:94] pod "coredns-66bc5c9577-lwtl7" is "Ready"
	I1210 05:35:58.217026  248270 pod_ready.go:86] duration metric: took 5.6329ms for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.219507  248270 pod_ready.go:83] waiting for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.225094  248270 pod_ready.go:94] pod "etcd-addons-819501" is "Ready"
	I1210 05:35:58.225120  248270 pod_ready.go:86] duration metric: took 5.593139ms for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.227244  248270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.231410  248270 pod_ready.go:94] pod "kube-apiserver-addons-819501" is "Ready"
	I1210 05:35:58.231431  248270 pod_ready.go:86] duration metric: took 4.167307ms for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.234812  248270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.610624  248270 pod_ready.go:94] pod "kube-controller-manager-addons-819501" is "Ready"
	I1210 05:35:58.610654  248270 pod_ready.go:86] duration metric: took 375.820379ms for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.811334  248270 pod_ready.go:83] waiting for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.211461  248270 pod_ready.go:94] pod "kube-proxy-ngpzv" is "Ready"
	I1210 05:35:59.211491  248270 pod_ready.go:86] duration metric: took 400.130316ms for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.410382  248270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811154  248270 pod_ready.go:94] pod "kube-scheduler-addons-819501" is "Ready"
	I1210 05:35:59.811187  248270 pod_ready.go:86] duration metric: took 400.778411ms for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811204  248270 pod_ready.go:40] duration metric: took 1.605466877s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:59.859434  248270 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 05:35:59.861511  248270 out.go:179] * Done! kubectl is now configured to use "addons-819501" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.711891529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f78293-9177-4ed2-bd5c-a4db462e4a94 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.713614802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be8aef59-7388-40ed-8350-3dadb83c912a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.716224281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345339716194282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be8aef59-7388-40ed-8350-3dadb83c912a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.717461845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=909ddbe3-c4d3-4f05-acaa-fbe42847edad name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.717528306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=909ddbe3-c4d3-4f05-acaa-fbe42847edad name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.717800100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=909ddbe3-c4d3-4f05-acaa-fbe42847edad name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.756479039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f1a25ea-b08f-4448-bd94-81017e388a3f name=/runtime.v1.RuntimeService/Version
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.756687318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f1a25ea-b08f-4448-bd94-81017e388a3f name=/runtime.v1.RuntimeService/Version
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.758856845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44466e27-254e-4046-8085-65b919a88e9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.760023203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345339759993540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44466e27-254e-4046-8085-65b919a88e9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.761179893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68d332f4-847e-4c13-a365-fd3286599f9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.761253555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68d332f4-847e-4c13-a365-fd3286599f9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.761604330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68d332f4-847e-4c13-a365-fd3286599f9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.771503713Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a8fb87ae-72fa-4189-93ef-a965a4b34baa name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.772424304Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8fdc7f05efe146ff38a143837de82eca8b7d0a60e55f483a9b9d7578c08c90ec,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,Uid:910c9402-5cd7-402a-8d7f-a1cdba98f550,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345339314572217,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 910c9402-5cd7-402a-8d7f-a1cdba98f550,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:42:18.995568074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1afbf99ee862a55fbd8eca5936830e400952649744c0962f36841802dc4fe9fc,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-t67b9,Uid:7f40dad2-5164-457
5-a745-f97826e47fed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345172820085487,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-t67b9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f40dad2-5164-4575-a745-f97826e47fed,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:39:32.499034915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&PodSandboxMetadata{Name:nginx,Uid:390b2e80-0538-4ebe-ae5c-2e24388c48e0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345025727185076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:37:05.40
4943628Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:479ad0f6-afd3-427d-9618-0e77a36d2f86,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344960797434068,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:36:00.473208652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-vsz96,Uid:5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595665388121,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container
.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:54.666677750Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&PodSandboxMetadata{Name:registry-proxy-25pr7,Uid:945db864-8d9f-4e37-b866-28b9f77d42c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595504860969,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 65b944f647,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kub
ernetes.io/config.seen: 2025-12-10T05:29:54.273097725Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cb22f113efb3b0c813b111e484b9ba307c09bbf3c3a0b67063c79e85e76d20b,Metadata:&PodSandboxMetadata{Name:registry-6b586f9694-lkhvn,Uid:0a8387d7-19c7-49cd-8425-48c60f2e70ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595483407014,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-6b586f9694-lkhvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a8387d7-19c7-49cd-8425-48c60f2e70ae,kubernetes.io/minikube-addons: registry,pod-template-hash: 6b586f9694,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:53.770566453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:76ed88a2-563b-4ee6-9a9a-9
4669a45bd2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595471609847,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provis
ioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-10T05:29:54.306781577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-xwmk6,Uid:ca338a7b-5d2c-4894-a615-0224cddd49ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344590914749277,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:50.584238948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea
13aabd9dea07,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-lwtl7,Uid:d84b8912-1587-45b3-956c-791ea7ec71c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344588377434877,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:47.992625821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-ngpzv,Uid:75c58eba-0463-42c9-a9d6-3c579349bd49,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344588140379911,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:47.793340995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-819501,Uid:aa13790632d350c6bc30d2faa0b6f981,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575305129363,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa13790632d350c6bc30d2faa0b6f981,kubernetes.io/config.seen: 2025-12-10T05:29:34.778671541Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27b140f7dc29d6de2662a56308de7
1b940b56f046ecec1f25c3af063d060eba9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-819501,Uid:5a0266aeda1eb6dc0732ac0ca983358e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575293580998,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5a0266aeda1eb6dc0732ac0ca983358e,kubernetes.io/config.seen: 2025-12-10T05:29:34.778670395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-819501,Uid:9414453fd34af8fe84f77d6b515bc5e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575286925560,Labels:map[string]string{component: kube-apiserver,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.227:8443,kubernetes.io/config.hash: 9414453fd34af8fe84f77d6b515bc5e6,kubernetes.io/config.seen: 2025-12-10T05:29:34.778669039Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&PodSandboxMetadata{Name:etcd-addons-819501,Uid:f2613b60cb2b81953748c1f1f1ecd406,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575284934789,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,tier: control-plane,},Annotations:map[string]string{kubeadm.kuber
netes.io/etcd.advertise-client-urls: https://192.168.50.227:2379,kubernetes.io/config.hash: f2613b60cb2b81953748c1f1f1ecd406,kubernetes.io/config.seen: 2025-12-10T05:29:34.778666461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a8fb87ae-72fa-4189-93ef-a965a4b34baa name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.773608161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2c8beff-e54b-4af2-96df-a07e340b5707 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.773664096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2c8beff-e54b-4af2-96df-a07e340b5707 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.773908780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2c8beff-e54b-4af2-96df-a07e340b5707 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.774924846Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 910c9402-5cd7-402a-8d7f-a1cdba98f550,},},}" file="otel-collector/interceptors.go:62" id=b63b8999-9a85-4bef-9d04-e04392a03daf name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.775017839Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8fdc7f05efe146ff38a143837de82eca8b7d0a60e55f483a9b9d7578c08c90ec,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,Uid:910c9402-5cd7-402a-8d7f-a1cdba98f550,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345339314572217,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 910c9402-5cd7-402a-8d7f-a1cdba98f550,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:42:18.995568074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b63b8999-9a85-4bef-9d04-e04392a03daf name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.776048230Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:8fdc7f05efe146ff38a143837de82eca8b7d0a60e55f483a9b9d7578c08c90ec,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0a69ed97-064b-4fed-9cec-1f940abd5e5b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.776411111Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:8fdc7f05efe146ff38a143837de82eca8b7d0a60e55f483a9b9d7578c08c90ec,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,Uid:910c9402-5cd7-402a-8d7f-a1cdba98f550,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345339314572217,Network:&PodSandboxNetworkStatus{Ip:10.244.0.32,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 910c9402-5cd7-402a-8d7f-a1cdba98f550,},Annotations:map[string]string{k
ubernetes.io/config.seen: 2025-12-10T05:42:18.995568074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=0a69ed97-064b-4fed-9cec-1f940abd5e5b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.777535222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 910c9402-5cd7-402a-8d7f-a1cdba98f550,},},}" file="otel-collector/interceptors.go:62" id=14e67917-8bea-491d-b469-2dc467c4b73b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.777646521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14e67917-8bea-491d-b469-2dc467c4b73b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:42:19 addons-819501 crio[812]: time="2025-12-10 05:42:19.777935249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=14e67917-8bea-491d-b469-2dc467c4b73b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	25e1deb92b904       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                 5 minutes ago       Running             nginx                     0                   ba29e6b659914       nginx                                     default
	c73d958852375       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                6 minutes ago       Running             busybox                   0                   14f0fd53f12b8       busybox                                   default
	1668d0c1d2873       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef   11 minutes ago      Running             local-path-provisioner    0                   1e43763de09da       local-path-provisioner-648f6765c9-vsz96   local-path-storage
	82db81abdfb84       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac    11 minutes ago      Running             registry-proxy            0                   a5a64939f7ce8       registry-proxy-25pr7                      kube-system
	cb5e7c29a0f38       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   12 minutes ago      Running             storage-provisioner       0                   3380b4d1273be       storage-provisioner                       kube-system
	203b77791ed58       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f           12 minutes ago      Running             amd-gpu-device-plugin     0                   9061092924ee0       amd-gpu-device-plugin-xwmk6               kube-system
	56051fcb51898       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                   12 minutes ago      Running             coredns                   0                   5c479d4bba0ca       coredns-66bc5c9577-lwtl7                  kube-system
	6bca39dd5c266       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                   12 minutes ago      Running             kube-proxy                0                   1b437dd96d110       kube-proxy-ngpzv                          kube-system
	1326c7547c796       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                   12 minutes ago      Running             kube-scheduler            0                   f274a00809863       kube-scheduler-addons-819501              kube-system
	f05e43ec5e70f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                   12 minutes ago      Running             etcd                      0                   3310898dd2e7b       etcd-addons-819501                        kube-system
	7c800fe0c31f2       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                   12 minutes ago      Running             kube-controller-manager   0                   27b140f7dc29d       kube-controller-manager-addons-819501     kube-system
	633a185de0b3b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                   12 minutes ago      Running             kube-apiserver            0                   079499566b48e       kube-apiserver-addons-819501              kube-system
	
	
	==> coredns [56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24] <==
	[INFO] 10.244.0.7:49475 - 4234 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00012427s
	[INFO] 10.244.0.7:34991 - 47607 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209559s
	[INFO] 10.244.0.7:34991 - 33015 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000164505s
	[INFO] 10.244.0.7:34991 - 21517 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085343s
	[INFO] 10.244.0.7:34991 - 20580 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085858s
	[INFO] 10.244.0.7:34991 - 19601 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000202983s
	[INFO] 10.244.0.7:34991 - 15879 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000136513s
	[INFO] 10.244.0.7:34991 - 31609 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000084825s
	[INFO] 10.244.0.7:34991 - 9951 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000077467s
	[INFO] 10.244.0.7:33989 - 13226 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000166541s
	[INFO] 10.244.0.7:33989 - 23409 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000273576s
	[INFO] 10.244.0.7:33989 - 29116 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000140993s
	[INFO] 10.244.0.7:33989 - 11043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000219382s
	[INFO] 10.244.0.7:33989 - 4888 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077041s
	[INFO] 10.244.0.7:33989 - 12162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000162634s
	[INFO] 10.244.0.7:33989 - 9165 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000098291s
	[INFO] 10.244.0.7:33989 - 17334 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015135s
	[INFO] 10.244.0.7:50332 - 30428 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000174168s
	[INFO] 10.244.0.7:50332 - 59077 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001490466s
	[INFO] 10.244.0.7:50332 - 13205 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000831967s
	[INFO] 10.244.0.7:50332 - 1983 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000771396s
	[INFO] 10.244.0.7:50332 - 8292 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000180704s
	[INFO] 10.244.0.7:50332 - 10894 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00029735s
	[INFO] 10.244.0.7:50332 - 17436 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000105433s
	[INFO] 10.244.0.7:50332 - 10735 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124054s
	
	
	==> describe nodes <==
	Name:               addons-819501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-819501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-819501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-819501
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-819501
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:42:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.227
	  Hostname:    addons-819501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba6b4ebfa05046a9ba182a04e8831219
	  System UUID:                ba6b4ebf-a050-46a9-ba18-2a04e8831219
	  Boot ID:                    216e7b9f-8c01-493d-bad4-cf3938ee1b07
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  default                     hello-world-app-5d498dc89-t67b9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 amd-gpu-device-plugin-xwmk6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-lwtl7                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-819501                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-819501                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-819501                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ngpzv                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-819501                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-6b586f9694-lkhvn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-25pr7                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-vsz96                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-819501 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-819501 event: Registered Node addons-819501 in Controller
	
	
	==> dmesg <==
	[  +0.000055] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.804071] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.010431] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.587817] kauditd_printk_skb: 11 callbacks suppressed
	[Dec10 05:36] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.071354] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.011099] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.409106] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.223792] kauditd_printk_skb: 72 callbacks suppressed
	[  +0.701216] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.750190] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.999003] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.624936] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 05:37] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.180251] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.341207] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.855665] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.223501] kauditd_printk_skb: 127 callbacks suppressed
	[Dec10 05:38] kauditd_printk_skb: 15 callbacks suppressed
	[Dec10 05:39] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.888203] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.052746] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 05:40] kauditd_printk_skb: 19 callbacks suppressed
	[Dec10 05:42] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f] <==
	{"level":"warn","ts":"2025-12-10T05:30:28.890002Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.047857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:30:28.890050Z","caller":"traceutil/trace.go:172","msg":"trace[629805527] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:992; }","duration":"200.098103ms","start":"2025-12-10T05:30:28.689944Z","end":"2025-12-10T05:30:28.890042Z","steps":["trace[629805527] 'agreement among raft nodes before linearized reading'  (duration: 200.027767ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011871Z","caller":"traceutil/trace.go:172","msg":"trace[1685321480] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"232.957389ms","start":"2025-12-10T05:31:13.778902Z","end":"2025-12-10T05:31:14.011860Z","steps":["trace[1685321480] 'process raft request'  (duration: 232.862929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:31:14.011968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.972115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:31:14.011995Z","caller":"traceutil/trace.go:172","msg":"trace[2111277061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"108.022783ms","start":"2025-12-10T05:31:13.903967Z","end":"2025-12-10T05:31:14.011990Z","steps":["trace[2111277061] 'agreement among raft nodes before linearized reading'  (duration: 107.945145ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011799Z","caller":"traceutil/trace.go:172","msg":"trace[1752202248] linearizableReadLoop","detail":"{readStateIndex:1150; appliedIndex:1150; }","duration":"107.780261ms","start":"2025-12-10T05:31:13.903991Z","end":"2025-12-10T05:31:14.011771Z","steps":["trace[1752202248] 'read index received'  (duration: 107.774382ms)","trace[1752202248] 'applied index is now lower than readState.Index'  (duration: 5.187µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:32:25.305922Z","caller":"traceutil/trace.go:172","msg":"trace[1251573471] linearizableReadLoop","detail":"{readStateIndex:1319; appliedIndex:1319; }","duration":"121.87535ms","start":"2025-12-10T05:32:25.184006Z","end":"2025-12-10T05:32:25.305882Z","steps":["trace[1251573471] 'read index received'  (duration: 121.869174ms)","trace[1251573471] 'applied index is now lower than readState.Index'  (duration: 5.013µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:32:25.306150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.098039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306183Z","caller":"traceutil/trace.go:172","msg":"trace[135230896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.173364ms","start":"2025-12-10T05:32:25.184001Z","end":"2025-12-10T05:32:25.306174Z","steps":["trace[135230896] 'agreement among raft nodes before linearized reading'  (duration: 122.073487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306204Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.992998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306322Z","caller":"traceutil/trace.go:172","msg":"trace[181149218] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.021433ms","start":"2025-12-10T05:32:25.184200Z","end":"2025-12-10T05:32:25.306222Z","steps":["trace[181149218] 'agreement among raft nodes before linearized reading'  (duration: 121.979708ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:25.306014Z","caller":"traceutil/trace.go:172","msg":"trace[1094354552] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"172.845984ms","start":"2025-12-10T05:32:25.133156Z","end":"2025-12-10T05:32:25.306002Z","steps":["trace[1094354552] 'process raft request'  (duration: 172.745468ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.446698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306494Z","caller":"traceutil/trace.go:172","msg":"trace[180843504] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1269; }","duration":"112.467638ms","start":"2025-12-10T05:32:25.194022Z","end":"2025-12-10T05:32:25.306490Z","steps":["trace[180843504] 'agreement among raft nodes before linearized reading'  (duration: 112.436939ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:38.226558Z","caller":"traceutil/trace.go:172","msg":"trace[587922016] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"227.161495ms","start":"2025-12-10T05:32:37.999377Z","end":"2025-12-10T05:32:38.226539Z","steps":["trace[587922016] 'process raft request'  (duration: 226.986721ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.519985Z","caller":"traceutil/trace.go:172","msg":"trace[1445196895] linearizableReadLoop","detail":"{readStateIndex:1981; appliedIndex:1981; }","duration":"234.877958ms","start":"2025-12-10T05:36:40.285064Z","end":"2025-12-10T05:36:40.519942Z","steps":["trace[1445196895] 'read index received'  (duration: 234.872722ms)","trace[1445196895] 'applied index is now lower than readState.Index'  (duration: 4.502µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:36:40.520340Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.154055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520246Z","caller":"traceutil/trace.go:172","msg":"trace[297048297] transaction","detail":"{read_only:false; response_revision:1873; number_of_response:1; }","duration":"262.670866ms","start":"2025-12-10T05:36:40.257562Z","end":"2025-12-10T05:36:40.520233Z","steps":["trace[297048297] 'process raft request'  (duration: 262.516618ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.520381Z","caller":"traceutil/trace.go:172","msg":"trace[281380984] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1872; }","duration":"235.311996ms","start":"2025-12-10T05:36:40.285059Z","end":"2025-12-10T05:36:40.520371Z","steps":["trace[281380984] 'agreement among raft nodes before linearized reading'  (duration: 235.122733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:36:40.520595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.849704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520613Z","caller":"traceutil/trace.go:172","msg":"trace[1952149145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1873; }","duration":"150.87196ms","start":"2025-12-10T05:36:40.369736Z","end":"2025-12-10T05:36:40.520608Z","steps":["trace[1952149145] 'agreement among raft nodes before linearized reading'  (duration: 150.835739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:55.773886Z","caller":"traceutil/trace.go:172","msg":"trace[380006586] transaction","detail":"{read_only:false; response_revision:1936; number_of_response:1; }","duration":"170.469213ms","start":"2025-12-10T05:36:55.603390Z","end":"2025-12-10T05:36:55.773859Z","steps":["trace[380006586] 'process raft request'  (duration: 169.093512ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:39:36.999194Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2025-12-10T05:39:37.126471Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1514,"took":"125.267941ms","hash":3277531955,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4313088,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2025-12-10T05:39:37.126557Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3277531955,"revision":1514,"compact-revision":-1}
	
	
	==> kernel <==
	 05:42:20 up 13 min,  0 users,  load average: 0.37, 0.60, 0.55
	Linux addons-819501 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a] <==
	E1210 05:30:25.400807       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:30:25.402016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 05:30:25.766164       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:36:09.659812       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43450: use of closed network connection
	E1210 05:36:09.874699       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43486: use of closed network connection
	I1210 05:36:19.186903       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.102.154"}
	I1210 05:36:26.827800       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1210 05:37:05.234793       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:37:05.460234       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.93.167"}
	I1210 05:37:20.960481       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 05:37:37.370348       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.370420       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.400032       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.400095       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.440464       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.440520       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.462791       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.462856       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.487024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.487088       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1210 05:37:38.441153       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1210 05:37:38.487618       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1210 05:37:38.518385       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1210 05:39:32.596673       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.118.237"}
	I1210 05:39:38.664950       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948] <==
	E1210 05:39:02.992372       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:02.993613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:28.832479       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:28.833625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:36.422519       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:36.423840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:37.275765       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:37.277570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 05:39:47.168907       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	E1210 05:40:26.694850       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:26.696003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:26.748489       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:26.749593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:37.079871       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:37.081093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:59.516011       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:59.517476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:41:06.818849       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:41:06.820228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:41:31.320466       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:41:31.322104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:41:41.609592       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:41:41.610718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:41:49.517214       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:41:49.518512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68] <==
	I1210 05:29:49.296578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:29:49.398048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:29:49.398087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.227"]
	E1210 05:29:49.398156       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:29:49.810599       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:29:49.810670       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:29:49.810702       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:29:49.903119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:29:49.904430       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:29:49.904446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:49.914988       1 config.go:200] "Starting service config controller"
	I1210 05:29:49.915001       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:29:49.915020       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:29:49.915023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:29:49.915033       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:29:49.915036       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:29:49.917483       1 config.go:309] "Starting node config controller"
	I1210 05:29:49.917744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:29:49.917814       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:29:50.016054       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:29:50.016117       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:29:50.016149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9] <==
	I1210 05:29:39.754860       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:39.763116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.763203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.764832       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 05:29:39.765071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 05:29:39.765817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:29:39.769470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:29:39.772002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:39.772200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:39.772466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:29:39.772723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:29:39.773020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:39.773348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:29:39.773463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:29:39.773482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:29:39.773493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:29:39.777582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:39.777709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:39.777771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:29:39.778948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:29:39.779052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:39.779103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:29:39.779170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:39.779217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1210 05:29:41.063910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:41:33 addons-819501 kubelet[2254]: I1210 05:41:33.854481    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:33 addons-819501 kubelet[2254]: E1210 05:41:33.856084    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:41:42 addons-819501 kubelet[2254]: E1210 05:41:42.301922    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345302301347964  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:42 addons-819501 kubelet[2254]: E1210 05:41:42.301968    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345302301347964  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:44 addons-819501 kubelet[2254]: I1210 05:41:44.850375    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:44 addons-819501 kubelet[2254]: E1210 05:41:44.852619    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:41:51 addons-819501 kubelet[2254]: I1210 05:41:51.850081    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-25pr7" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:52 addons-819501 kubelet[2254]: E1210 05:41:52.305476    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345312305035279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:52 addons-819501 kubelet[2254]: E1210 05:41:52.305499    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345312305035279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:55 addons-819501 kubelet[2254]: I1210 05:41:55.850691    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:55 addons-819501 kubelet[2254]: E1210 05:41:55.852075    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:42:02 addons-819501 kubelet[2254]: E1210 05:42:02.308554    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345322308167348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:42:02 addons-819501 kubelet[2254]: E1210 05:42:02.308600    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345322308167348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:42:02 addons-819501 kubelet[2254]: I1210 05:42:02.850157    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:42:07 addons-819501 kubelet[2254]: I1210 05:42:07.850991    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:42:07 addons-819501 kubelet[2254]: E1210 05:42:07.852751    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:42:12 addons-819501 kubelet[2254]: E1210 05:42:12.311950    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345332311174140  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:42:12 addons-819501 kubelet[2254]: E1210 05:42:12.312338    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345332311174140  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:42:17 addons-819501 kubelet[2254]: E1210 05:42:17.524914    2254 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Dec 10 05:42:17 addons-819501 kubelet[2254]: E1210 05:42:17.524987    2254 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Dec 10 05:42:17 addons-819501 kubelet[2254]: E1210 05:42:17.525087    2254 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-t67b9_default(7f40dad2-5164-4575-a745-f97826e47fed): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 10 05:42:17 addons-819501 kubelet[2254]: E1210 05:42:17.525116    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-t67b9" podUID="7f40dad2-5164-4575-a745-f97826e47fed"
	Dec 10 05:42:19 addons-819501 kubelet[2254]: I1210 05:42:19.118968    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/910c9402-5cd7-402a-8d7f-a1cdba98f550-data\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"910c9402-5cd7-402a-8d7f-a1cdba98f550\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	Dec 10 05:42:19 addons-819501 kubelet[2254]: I1210 05:42:19.119021    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7xb\" (UniqueName: \"kubernetes.io/projected/910c9402-5cd7-402a-8d7f-a1cdba98f550-kube-api-access-bs7xb\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"910c9402-5cd7-402a-8d7f-a1cdba98f550\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	Dec 10 05:42:19 addons-819501 kubelet[2254]: I1210 05:42:19.119042    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/910c9402-5cd7-402a-8d7f-a1cdba98f550-script\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"910c9402-5cd7-402a-8d7f-a1cdba98f550\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	
	
	==> storage-provisioner [cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320] <==
	W1210 05:41:54.940801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:56.944106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:56.949927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:58.954752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:58.963096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:00.967462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:00.974048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:02.977513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:02.982841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:04.986762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:04.995045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:06.999077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:07.005914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:09.010613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:09.019212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:11.022528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:11.027920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:13.031968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:13.040740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:15.045602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:15.052834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:17.057027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:17.063001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:19.068242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:42:19.076173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819501 -n addons-819501
helpers_test.go:270: (dbg) Run:  kubectl --context addons-819501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042: exit status 1 (82.480365ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-t67b9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-819501/192.168.50.227
	Start Time:       Wed, 10 Dec 2025 05:39:32 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbf7r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bbf7r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m49s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-t67b9 to addons-819501
	  Warning  Failed     94s                  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    94s                  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     94s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    79s (x2 over 2m48s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     4s (x2 over 94s)     kubelet            Error: ErrImagePull
	  Warning  Failed     4s                   kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4jq9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-v4jq9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-6b586f9694-lkhvn" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (362.92s)
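Editor's note: the Registry failure above looks environmental rather than a product regression. The registry pod sat in ImagePullBackOff for the whole 6m0s wait because every pull of docker.io/registry:3.0.0 was rejected with "toomanyrequests: You have reached your unauthenticated pull rate limit" (see the kubelet entries above); the same limit also blocked docker.io/kicbase/echo-server:1.0 for hello-world-app. A possible mitigation sketch, assuming the runner has authenticated Docker Hub credentials and that the addon's imagePullPolicy will accept a pre-loaded image; these commands are illustrative and were not part of the recorded run:

	# pull once on the host with authenticated credentials, then side-load into the cluster node
	docker pull docker.io/registry:3.0.0
	minikube -p addons-819501 image load docker.io/registry:3.0.0

	# confirm the pod recovers once the image is present (or once the rate-limit window resets)
	kubectl --context addons-819501 -n kube-system describe pod -l actual-registry=true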

                                                
                                    
TestAddons/parallel/Ingress (159.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-819501 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-819501 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-819501 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [390b2e80-0538-4ebe-ae5c-2e24388c48e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [390b2e80-0538-4ebe-ae5c-2e24388c48e0] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004434329s
I1210 05:37:19.488864  247366 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-819501 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.848996907s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
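The failed step above runs curl inside the VM with a Host header; ssh exit status 28 is curl's "operation timed out". A minimal sketch of the same Host-header check from the host side is shown below; the node IP and plain HTTP port are taken from this run's logs and are assumptions for illustration, not the test's actual curl-over-ssh invocation.

// ingress_probe.go: sketch of the check the test performs with
// curl -s http://127.0.0.1/ -H 'Host: nginx.example.com', issued here
// against the node IP reported earlier in this log (an assumption for
// illustration only).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://192.168.50.227/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress controller routes on the Host header, so it must match the Ingress rule.
	req.Host = "nginx.example.com"
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // analogous to the curl timeout (exit 28) above
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}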
addons_test.go:290: (dbg) Run:  kubectl --context addons-819501 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.50.227
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-819501 -n addons-819501
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 logs -n 25: (1.371620604s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-160810                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-160810 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-140393                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-829998                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-160810                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-160810 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ --download-only -p binary-mirror-177372 --alsologtostderr --binary-mirror http://127.0.0.1:39073 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ -p binary-mirror-177372                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ addons  │ disable dashboard -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ start   │ -p addons-819501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:35 UTC │
	│ addons  │ addons-819501 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:35 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ enable headlamp -p addons-819501 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:37 UTC │
	│ ssh     │ addons-819501 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │                     │
	│ addons  │ addons-819501 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-819501                                                                                                                                                                                                                                                                                                                                                                                         │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ ip      │ addons-819501 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-819501        │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:52.105088  248270 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:52.105179  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105184  248270 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:52.105188  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105358  248270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:28:52.105849  248270 out.go:368] Setting JSON to false
	I1210 05:28:52.106664  248270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25879,"bootTime":1765318653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:52.106723  248270 start.go:143] virtualization: kvm guest
	I1210 05:28:52.108609  248270 out.go:179] * [addons-819501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:52.110355  248270 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:28:52.110393  248270 notify.go:221] Checking for updates...
	I1210 05:28:52.112643  248270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:52.114625  248270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:52.115949  248270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.117122  248270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:28:52.118420  248270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:28:52.119836  248270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:52.150913  248270 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 05:28:52.152295  248270 start.go:309] selected driver: kvm2
	I1210 05:28:52.152312  248270 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:28:52.152325  248270 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:28:52.153083  248270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:52.153343  248270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:28:52.153369  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:28:52.153432  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:28:52.153449  248270 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:52.153504  248270 start.go:353] cluster config:
	{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1210 05:28:52.153618  248270 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.155323  248270 out.go:179] * Starting "addons-819501" primary control-plane node in "addons-819501" cluster
	I1210 05:28:52.156436  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 05:28:52.175813  248270 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 05:28:52.189575  248270 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 05:28:52.189954  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.189997  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:28:52.190028  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json: {Name:mk888fb8e14ee6a18b9f0bd32a9670b388cb1bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:28:52.190232  248270 start.go:360] acquireMachinesLock for addons-819501: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 05:28:52.190306  248270 start.go:364] duration metric: took 53.257µs to acquireMachinesLock for "addons-819501"
	I1210 05:28:52.190335  248270 start.go:93] Provisioning new machine with config: &{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:28:52.190423  248270 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 05:28:52.192350  248270 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1210 05:28:52.192578  248270 start.go:159] libmachine.API.Create for "addons-819501" (driver="kvm2")
	I1210 05:28:52.192614  248270 client.go:173] LocalClient.Create starting
	I1210 05:28:52.192740  248270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem
	I1210 05:28:52.332984  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.335651  248270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem
	I1210 05:28:52.470748  248270 main.go:143] libmachine: creating domain...
	I1210 05:28:52.470771  248270 main.go:143] libmachine: creating network...
	I1210 05:28:52.472517  248270 main.go:143] libmachine: found existing default network
	I1210 05:28:52.472684  248270 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.473207  248270 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:1f:09} reservation:<nil>}
	I1210 05:28:52.473578  248270 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cefc30}
	I1210 05:28:52.473663  248270 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-819501</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.479464  248270 main.go:143] libmachine: creating private network mk-addons-819501 192.168.50.0/24...
	I1210 05:28:52.482923  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.558222  248270 main.go:143] libmachine: private network mk-addons-819501 192.168.50.0/24 created
	I1210 05:28:52.558553  248270 main.go:143] libmachine: <network>
	  <name>mk-addons-819501</name>
	  <uuid>c2bdce80-7332-4fd7-b021-02079a969afe</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:ec:cf:14'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
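The network.go lines above show the subnet choice: 192.168.39.0/24 is skipped because an existing bridge already uses it, and 192.168.50.0/24 is picked as free. A minimal sketch of that kind of overlap check against local interface addresses follows; it is not minikube's implementation (that lives in the network.go referenced above), and the candidate list is illustrative.

// subnet_check.go: sketch of the "skip taken subnet / use free subnet"
// decision logged above. It walks local interface addresses and reports
// whether a candidate /24 overlaps any of them.
package main

import (
	"fmt"
	"net"
)

func subnetTaken(cidr string) (bool, error) {
	_, candidate, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		// Overlap if either network contains the other's base address.
		if candidate.Contains(ipnet.IP) || ipnet.Contains(candidate.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s taken=%v\n", cidr, taken)
	}
}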
	
	I1210 05:28:52.558584  248270 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.558604  248270 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:28:52.558615  248270 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.558685  248270 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22094-243461/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 05:28:52.641062  248270 cache.go:107] acquiring lock: {Name:mk4f601fcccaa8421d9a471640a96feb5df57ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641095  248270 cache.go:107] acquiring lock: {Name:mka12e8a345a6dc24c0da40f31d69a169b73fc8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641054  248270 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641134  248270 cache.go:107] acquiring lock: {Name:mk72740fe8a4d4eb6e3ad18d28ff308f87f86eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641137  248270 cache.go:107] acquiring lock: {Name:mkc558d20fc07b350030510216ebcf1d2df4b57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641148  248270 cache.go:107] acquiring lock: {Name:mkca46313d0e39171add494fd1f96b98422fb511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641203  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 05:28:52.641216  248270 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 161.132µs
	I1210 05:28:52.641226  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:28:52.641234  248270 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 05:28:52.641227  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 05:28:52.641058  248270 cache.go:107] acquiring lock: {Name:mkc561f0208895e5efe372932a5a00136ddcb2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641242  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 05:28:52.641248  248270 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 118.214µs
	I1210 05:28:52.641254  248270 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 166.04µs
	I1210 05:28:52.641260  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 05:28:52.641263  248270 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 05:28:52.641265  248270 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 163.887µs
	I1210 05:28:52.641287  248270 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 246.182µs
	I1210 05:28:52.641301  248270 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 05:28:52.641294  248270 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 05:28:52.641293  248270 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 160.457µs
	I1210 05:28:52.641270  248270 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641237  248270 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 216.984µs
	I1210 05:28:52.641445  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:28:52.641457  248270 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 230.713µs
	I1210 05:28:52.641467  248270 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:28:52.641313  248270 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 05:28:52.641463  248270 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:28:52.641504  248270 cache.go:87] Successfully saved all images to host disk.
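The cache.go lines above take a per-image lock, check whether the image tar already exists under the cache directory, and report the save as succeeded. A minimal sketch of that existence check is shown below; the path layout (tag ':' becoming '_') is inferred from the paths printed in this log and the image list is illustrative.

// cachepath_sketch.go: sketch of the cache check logged above — each image
// ref maps to a tar path under the minikube image cache, and an image is
// skipped if that file already exists.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cachePath(base, image string) string {
	// "registry.k8s.io/pause:3.10.1" -> "<base>/registry.k8s.io/pause_3.10.1"
	return filepath.Join(base, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	base := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	images := []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/kube-apiserver:v1.34.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range images {
		p := cachePath(base, img)
		if _, err := os.Stat(p); err == nil {
			fmt.Println("cached:", p)
		} else {
			fmt.Println("needs save:", img)
		}
	}
}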
	I1210 05:28:52.824390  248270 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa...
	I1210 05:28:52.868316  248270 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk...
	I1210 05:28:52.868386  248270 main.go:143] libmachine: Writing magic tar header
	I1210 05:28:52.868426  248270 main.go:143] libmachine: Writing SSH key tar header
	I1210 05:28:52.868507  248270 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.868575  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501
	I1210 05:28:52.868607  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 (perms=drwx------)
	I1210 05:28:52.868619  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines
	I1210 05:28:52.868633  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines (perms=drwxr-xr-x)
	I1210 05:28:52.868644  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.868656  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube (perms=drwxr-xr-x)
	I1210 05:28:52.868664  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461
	I1210 05:28:52.868674  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461 (perms=drwxrwxr-x)
	I1210 05:28:52.868688  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 05:28:52.868698  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 05:28:52.868706  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 05:28:52.868716  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 05:28:52.868725  248270 main.go:143] libmachine: checking permissions on dir: /home
	I1210 05:28:52.868733  248270 main.go:143] libmachine: skipping /home - not owner
	I1210 05:28:52.868738  248270 main.go:143] libmachine: defining domain...
	I1210 05:28:52.870229  248270 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
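Immediately after defining the domain, the log reports the MAC address assigned to each network. A minimal sketch that pulls the same information out of a libvirt domain XML like the one above is shown below; the input file name is hypothetical (for example, saved from `virsh dumpxml addons-819501`), and the struct covers only the elements needed here.

// domxml.go: sketch that extracts what the next log lines report — the MAC
// address defined for each <interface> and its source network — from a
// libvirt domain XML document.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	data, err := os.ReadFile("addons-819501.xml") // hypothetical path, e.g. from `virsh dumpxml`
	if err != nil {
		panic(err)
	}
	var d domain
	if err := xml.Unmarshal(data, &d); err != nil {
		panic(err)
	}
	for _, iface := range d.Interfaces {
		fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
	}
}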
	
	I1210 05:28:52.875406  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:bd:43:bd in network default
	I1210 05:28:52.876104  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:52.876118  248270 main.go:143] libmachine: starting domain...
	I1210 05:28:52.876122  248270 main.go:143] libmachine: ensuring networks are active...
	I1210 05:28:52.877154  248270 main.go:143] libmachine: Ensuring network default is active
	I1210 05:28:52.877579  248270 main.go:143] libmachine: Ensuring network mk-addons-819501 is active
	I1210 05:28:52.878149  248270 main.go:143] libmachine: getting domain XML...
	I1210 05:28:52.879236  248270 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <uuid>ba6b4ebf-a050-46a9-ba18-2a04e8831219</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:0b:26:32'/>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:43:bd'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:28:54.168962  248270 main.go:143] libmachine: waiting for domain to start...
	I1210 05:28:54.170546  248270 main.go:143] libmachine: domain is now running
	I1210 05:28:54.170570  248270 main.go:143] libmachine: waiting for IP...
	I1210 05:28:54.171414  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.172058  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.172073  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.172400  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.172444  248270 retry.go:31] will retry after 204.150227ms: waiting for domain to come up
	I1210 05:28:54.378048  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.378807  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.378824  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.379142  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.379187  248270 retry.go:31] will retry after 336.586353ms: waiting for domain to come up
	I1210 05:28:54.717782  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.718612  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.718630  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.719044  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.719085  248270 retry.go:31] will retry after 427.236784ms: waiting for domain to come up
	I1210 05:28:55.147903  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.148695  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.148717  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.149130  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.149170  248270 retry.go:31] will retry after 496.970231ms: waiting for domain to come up
	I1210 05:28:55.648236  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.648976  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.648993  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.649385  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.649419  248270 retry.go:31] will retry after 685.299323ms: waiting for domain to come up
	I1210 05:28:56.336314  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:56.336946  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:56.336962  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:56.337319  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:56.337366  248270 retry.go:31] will retry after 806.287256ms: waiting for domain to come up
	I1210 05:28:57.145591  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.146271  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.146294  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.146653  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.146702  248270 retry.go:31] will retry after 821.107194ms: waiting for domain to come up
	I1210 05:28:57.969805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.970505  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.970524  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.970852  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.970909  248270 retry.go:31] will retry after 1.109916147s: waiting for domain to come up
	I1210 05:28:59.082244  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:59.082858  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:59.082893  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:59.083281  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:59.083325  248270 retry.go:31] will retry after 1.728427418s: waiting for domain to come up
	I1210 05:29:00.814529  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:00.815344  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:00.815363  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:00.815773  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:00.815820  248270 retry.go:31] will retry after 1.517793987s: waiting for domain to come up
	I1210 05:29:02.335622  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:02.336400  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:02.336422  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:02.336895  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:02.336945  248270 retry.go:31] will retry after 2.6142192s: waiting for domain to come up
	I1210 05:29:04.954635  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:04.955354  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:04.955379  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:04.955714  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:04.955755  248270 retry.go:31] will retry after 2.739648926s: waiting for domain to come up
	I1210 05:29:07.696760  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:07.697527  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:07.697545  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:07.697920  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:07.697964  248270 retry.go:31] will retry after 2.936432251s: waiting for domain to come up
	I1210 05:29:10.638105  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638831  248270 main.go:143] libmachine: domain addons-819501 has current primary IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638865  248270 main.go:143] libmachine: found domain IP: 192.168.50.227
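The retry.go lines above poll for the domain's DHCP lease with a growing wait between attempts until an IP appears. A minimal sketch of that wait-and-retry shape follows; it is not minikube's retry implementation (which also jitters the delay), and the stand-in condition is illustrative.

// retry_sketch.go: sketch of the wait-for-IP pattern logged above — poll a
// condition, sleep an increasing interval between attempts, stop when it
// succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval, roughly as the log shows
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Stand-in for "does the domain have an IP yet?"
		return time.Since(start) > 2*time.Second, nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}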
	I1210 05:29:10.638889  248270 main.go:143] libmachine: reserving static IP address...
	I1210 05:29:10.639331  248270 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-819501", mac: "52:54:00:0b:26:32", ip: "192.168.50.227"} in network mk-addons-819501
	I1210 05:29:10.837647  248270 main.go:143] libmachine: reserved static IP address 192.168.50.227 for domain addons-819501
	I1210 05:29:10.837674  248270 main.go:143] libmachine: waiting for SSH...
	I1210 05:29:10.837683  248270 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 05:29:10.841998  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.842734  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.842776  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.843052  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.843817  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.843851  248270 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 05:29:10.953140  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:10.953577  248270 main.go:143] libmachine: domain creation complete
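WaitForSSH above is a readiness probe: it simply runs `exit 0` over SSH against 192.168.50.227:22 until the command succeeds. A rough equivalent using golang.org/x/crypto/ssh is sketched below; the user, key path, and polling interval are assumptions for illustration, not the values libmachine uses internally.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH polls the guest until a trivial "exit 0" succeeds, the same
    // readiness check the log shows before provisioning continues.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
    		Timeout:         5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
    			sess, err := client.NewSession()
    			if err == nil {
    				runErr := sess.Run("exit 0")
    				sess.Close()
    				client.Close()
    				if runErr == nil {
    					return nil
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not ready within %v", addr, timeout)
    }
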
	I1210 05:29:10.955178  248270 machine.go:94] provisionDockerMachine start ...
	I1210 05:29:10.957672  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958111  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.958134  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958334  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.958541  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.958552  248270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:29:11.063469  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 05:29:11.063510  248270 buildroot.go:166] provisioning hostname "addons-819501"
	I1210 05:29:11.066851  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067359  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.067386  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067581  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.067818  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.067836  248270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-819501 && echo "addons-819501" | sudo tee /etc/hostname
	I1210 05:29:11.191238  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-819501
	
	I1210 05:29:11.194283  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194634  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.194662  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194813  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.195030  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.195045  248270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-819501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-819501/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-819501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:29:11.310332  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:11.310364  248270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 05:29:11.310433  248270 buildroot.go:174] setting up certificates
	I1210 05:29:11.310448  248270 provision.go:84] configureAuth start
	I1210 05:29:11.314015  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.314505  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.314528  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317045  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317504  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.317533  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317778  248270 provision.go:143] copyHostCerts
	I1210 05:29:11.317897  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 05:29:11.318079  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 05:29:11.318163  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 05:29:11.318221  248270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.addons-819501 san=[127.0.0.1 192.168.50.227 addons-819501 localhost minikube]
	I1210 05:29:11.380449  248270 provision.go:177] copyRemoteCerts
	I1210 05:29:11.380516  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:29:11.383191  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383530  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.383557  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383724  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.468790  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 05:29:11.501764  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:29:11.536197  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:29:11.569192  248270 provision.go:87] duration metric: took 258.704158ms to configureAuth
	I1210 05:29:11.569224  248270 buildroot.go:189] setting minikube options for container-runtime
	I1210 05:29:11.569456  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:11.572768  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573263  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.573289  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573596  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.573815  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.573830  248270 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:29:11.833231  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:29:11.833265  248270 machine.go:97] duration metric: took 878.061601ms to provisionDockerMachine
	I1210 05:29:11.833278  248270 client.go:176] duration metric: took 19.640654056s to LocalClient.Create
	I1210 05:29:11.833288  248270 start.go:167] duration metric: took 19.640714044s to libmachine.API.Create "addons-819501"
	I1210 05:29:11.833300  248270 start.go:293] postStartSetup for "addons-819501" (driver="kvm2")
	I1210 05:29:11.833326  248270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:29:11.833399  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:29:11.836778  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837269  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.837308  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837481  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.922086  248270 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:29:11.927731  248270 info.go:137] Remote host: Buildroot 2025.02
	I1210 05:29:11.927773  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 05:29:11.927871  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 05:29:11.927934  248270 start.go:296] duration metric: took 94.612566ms for postStartSetup
	I1210 05:29:11.931495  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.931980  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.932019  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.932307  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:29:11.932540  248270 start.go:128] duration metric: took 19.74210366s to createHost
	I1210 05:29:11.934767  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935144  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.935166  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935324  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.935513  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.935522  248270 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 05:29:12.046287  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765344552.004969125
	
	I1210 05:29:12.046317  248270 fix.go:216] guest clock: 1765344552.004969125
	I1210 05:29:12.046328  248270 fix.go:229] Guest: 2025-12-10 05:29:12.004969125 +0000 UTC Remote: 2025-12-10 05:29:11.932556032 +0000 UTC m=+19.877288748 (delta=72.413093ms)
	I1210 05:29:12.046353  248270 fix.go:200] guest clock delta is within tolerance: 72.413093ms
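The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is small (72.4ms here). A compact sketch of that comparison is below; the one-second tolerance is an assumption for illustration, not minikube's configured threshold.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and checks whether the
    // skew against the host clock stays inside tolerance, as fix.go does above.
    func clockDeltaOK(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }

    func main() {
    	// Timestamp taken from the fix.go line above; tolerance is assumed.
    	delta, ok, err := clockDeltaOK("1765344552.004969125", time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }
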
	I1210 05:29:12.046359  248270 start.go:83] releasing machines lock for "addons-819501", held for 19.85604026s
	I1210 05:29:12.049360  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.049703  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.049730  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.050472  248270 ssh_runner.go:195] Run: cat /version.json
	I1210 05:29:12.050505  248270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:29:12.053634  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054149  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054174  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054210  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054370  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.054796  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054838  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.055088  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.153950  248270 ssh_runner.go:195] Run: systemctl --version
	I1210 05:29:12.161170  248270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:29:12.329523  248270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:29:12.337761  248270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:29:12.337846  248270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:29:12.363822  248270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:29:12.363854  248270 start.go:496] detecting cgroup driver to use...
	I1210 05:29:12.363953  248270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:29:12.391660  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:29:12.411256  248270 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:29:12.411332  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:29:12.430231  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:29:12.447813  248270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:29:12.603440  248270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:29:12.827553  248270 docker.go:234] disabling docker service ...
	I1210 05:29:12.827647  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:29:12.846039  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:29:12.862361  248270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:29:13.020176  248270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:29:13.164368  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:29:13.182024  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:29:13.206545  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:13.357154  248270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:29:13.357230  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.371402  248270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 05:29:13.371473  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.385362  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.398751  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.412259  248270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:29:13.426396  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.440016  248270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.462382  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.476454  248270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:29:13.487470  248270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:29:13.487559  248270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:29:13.511008  248270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:29:13.525764  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:13.668661  248270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:29:13.804341  248270 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:29:13.804461  248270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:29:13.811120  248270 start.go:564] Will wait 60s for crictl version
	I1210 05:29:13.811237  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:13.816221  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 05:29:13.855240  248270 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 05:29:13.855361  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.886038  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.919951  248270 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1210 05:29:13.923902  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924339  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:13.924363  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924587  248270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 05:29:13.929723  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:13.945980  248270 kubeadm.go:884] updating cluster {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:29:13.946170  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.110289  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.252203  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.408812  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:29:14.408919  248270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:29:14.443222  248270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 05:29:14.443255  248270 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:29:14.443321  248270 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.443333  248270 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.443375  248270 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.443403  248270 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.443402  248270 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.443464  248270 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.443462  248270 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.443481  248270 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.444803  248270 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.444813  248270 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.444829  248270 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.444808  248270 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.444831  248270 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.444820  248270 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.586669  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.587332  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.588817  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.594403  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.600393  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.610504  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:29:14.612226  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.715068  248270 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:29:14.715118  248270 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.715173  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 05:29:14.779487  248270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 05:29:14.779538  248270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.779539  248270 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 05:29:14.779571  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779582  248270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.779618  248270 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 05:29:14.779660  248270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.779708  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779715  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779550  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787172  248270 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 05:29:14.787221  248270 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.787237  248270 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 05:29:14.787275  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787288  248270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.787332  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.787336  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.791460  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.791483  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.791502  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.791543  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.866781  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.866836  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.866853  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.883432  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.892951  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.893016  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.912927  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.976361  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.984059  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.003284  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:15.029014  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:15.048649  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:15.048661  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:15.048727  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:15.116050  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:15.143539  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.143601  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 05:29:15.143736  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:15.173190  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 05:29:15.173206  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 05:29:15.173333  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:15.173334  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:15.188713  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 05:29:15.188872  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:15.194426  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 05:29:15.194565  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.218546  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 05:29:15.218551  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:29:15.218634  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:29:15.218700  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.238721  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 05:29:15.238751  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 05:29:15.238779  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 05:29:15.238854  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 05:29:15.238898  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 05:29:15.238941  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 05:29:15.238975  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 05:29:15.238991  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 05:29:15.238858  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:15.239079  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 05:29:15.244693  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:29:15.244738  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 05:29:15.336790  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 05:29:15.336841  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
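Every cached image above follows the same dance: `stat` the tarball under /var/lib/minikube/images on the guest, and scp it from the local cache only when that stat fails, before it is loaded with `sudo podman load`. A simplified, assumption-laden sketch of that check-then-copy step is below; it shells out to the system ssh/scp binaries as stand-ins for minikube's ssh_runner, which is not its real API.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureRemoteImage mirrors the pattern in the log: stat the image tarball on
    // the guest and scp it from the local cache only when it is missing.
    func ensureRemoteImage(host, keyPath, localPath, remotePath string) error {
    	stat := exec.Command("ssh", "-i", keyPath, host,
    		fmt.Sprintf("stat -c '%%s %%y' %s", remotePath))
    	if err := stat.Run(); err == nil {
    		return nil // already present, nothing to transfer
    	}
    	cp := exec.Command("scp", "-i", keyPath, localPath,
    		fmt.Sprintf("%s:%s", host, remotePath))
    	if out, err := cp.CombinedOutput(); err != nil {
    		return fmt.Errorf("scp %s: %v\n%s", localPath, err, out)
    	}
    	// Loading into the runtime then happens with `sudo podman load -i <remotePath>`,
    	// as the crio.go lines below show.
    	return nil
    }
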
	I1210 05:29:15.374498  248270 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.374589  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.442120  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859327  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:29:15.859390  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859423  248270 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 05:29:15.859450  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859471  248270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859541  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:17.763703  248270 ssh_runner.go:235] Completed: which crictl: (1.904127102s)
	I1210 05:29:17.763747  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.904260399s)
	I1210 05:29:17.763776  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 05:29:17.763799  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:17.763815  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.763860  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.801244  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:19.427418  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.663530033s)
	I1210 05:29:19.427461  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 05:29:19.427466  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.626181448s)
	I1210 05:29:19.427490  248270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427547  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427548  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:21.515976  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088322228s)
	I1210 05:29:21.516048  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:29:21.515979  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.088327761s)
	I1210 05:29:21.516139  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:21.516152  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 05:29:21.516199  248270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.516255  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.521779  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:29:21.521829  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 05:29:23.716371  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.200087663s)
	I1210 05:29:23.716404  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 05:29:23.716440  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:23.716492  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:25.782777  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (2.066253017s)
	I1210 05:29:25.782824  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 05:29:25.782859  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:25.782943  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:27.253133  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.47015767s)
	I1210 05:29:27.253186  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 05:29:27.253222  248270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.253296  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.900792  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:29:27.900866  248270 cache_images.go:125] Successfully loaded all cached images
	I1210 05:29:27.900893  248270 cache_images.go:94] duration metric: took 13.457620664s to LoadCachedImages
	I1210 05:29:27.900927  248270 kubeadm.go:935] updating node { 192.168.50.227 8443 v1.34.3 crio true true} ...
	I1210 05:29:27.901107  248270 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-819501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:29:27.901270  248270 ssh_runner.go:195] Run: crio config
	I1210 05:29:27.952088  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:27.952115  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:27.952136  248270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:29:27.952158  248270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.227 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-819501 NodeName:addons-819501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:29:27.952294  248270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-819501"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:29:27.952375  248270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.965806  248270 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 05:29:27.965903  248270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 05:29:27.978345  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 05:29:27.978458  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:27.978469  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 05:29:27.978552  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 05:29:27.998011  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 05:29:27.998043  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 05:29:27.998018  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 05:29:27.998067  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 05:29:27.998069  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 05:29:28.014708  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 05:29:28.014787  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 05:29:28.819394  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:29:28.832094  248270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 05:29:28.854035  248270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:29:28.875757  248270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1210 05:29:28.897490  248270 ssh_runner.go:195] Run: grep 192.168.50.227	control-plane.minikube.internal$ /etc/hosts
	I1210 05:29:28.902042  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:28.918543  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:29.065436  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:29.106997  248270 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501 for IP: 192.168.50.227
	I1210 05:29:29.107026  248270 certs.go:195] generating shared ca certs ...
	I1210 05:29:29.107047  248270 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.107244  248270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 05:29:29.260185  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt ...
	I1210 05:29:29.260226  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt: {Name:mk7e3ea493469b63ffe73a3fd5c0aebe67cc96c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260418  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key ...
	I1210 05:29:29.260430  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key: {Name:mk18d206c401766c525db7646d9b50127ae5a4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260509  248270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 05:29:29.303788  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt ...
	I1210 05:29:29.303818  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt: {Name:mk93a704a340d2989dfaa2c6ae18dd0ded5b740c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304005  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key ...
	I1210 05:29:29.304017  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key: {Name:mk52708f456900179c4e21317e6ee01f1f662a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304092  248270 certs.go:257] generating profile certs ...
	I1210 05:29:29.304158  248270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key
	I1210 05:29:29.304173  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt with IP's: []
	I1210 05:29:29.373028  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt ...
	I1210 05:29:29.373060  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: {Name:mkacd1d17bdb9699db5acb0deccedf4b963e9627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373247  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key ...
	I1210 05:29:29.373259  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key: {Name:mkc8bb793a8aba8601b09fb6b4c6b561546e1716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373344  248270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21
	I1210 05:29:29.373366  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.227]
	I1210 05:29:29.412783  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 ...
	I1210 05:29:29.412819  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21: {Name:mkbecca211b22f296a63bf12c0f8d6348e074d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413027  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 ...
	I1210 05:29:29.413042  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21: {Name:mkd21c7f030b36e5b0f136cec809fcc4792c4753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413124  248270 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt
	I1210 05:29:29.413195  248270 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key
	I1210 05:29:29.413246  248270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key
	I1210 05:29:29.413264  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt with IP's: []
	I1210 05:29:29.588512  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt ...
	I1210 05:29:29.588545  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt: {Name:mkbfd2473bf6ad2df18575d3c1713540ff713d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588726  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key ...
	I1210 05:29:29.588740  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key: {Name:mk5d55c2451593ca28ccc38ada487efa06a43ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588942  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:29:29.588986  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 05:29:29.589013  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:29:29.589037  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 05:29:29.589603  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:29:29.623229  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:29:29.655622  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:29:29.688080  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:29:29.719938  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:29:29.752646  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:29:29.787596  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:29:29.822559  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:29:29.861697  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:29:29.893727  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:29:29.916020  248270 ssh_runner.go:195] Run: openssl version
	I1210 05:29:29.923087  248270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.937161  248270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:29:29.950819  248270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956621  248270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956683  248270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.964340  248270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:29:29.977116  248270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:29:29.989798  248270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:29:29.994829  248270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:29:29.994916  248270 kubeadm.go:401] StartCluster: {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:29.995012  248270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:29:29.995077  248270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:29:30.033996  248270 cri.go:89] found id: ""
	I1210 05:29:30.034077  248270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:29:30.047749  248270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:29:30.061245  248270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:29:30.075038  248270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:29:30.075063  248270 kubeadm.go:158] found existing configuration files:
	
	I1210 05:29:30.075128  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:29:30.087377  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:29:30.087446  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:29:30.100015  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:29:30.112415  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:29:30.112501  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:29:30.125599  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.137858  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:29:30.137955  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.150895  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:29:30.162780  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:29:30.162853  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:29:30.175318  248270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 05:29:30.340910  248270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:29:42.405330  248270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 05:29:42.405415  248270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:29:42.405523  248270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:29:42.405657  248270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:29:42.405780  248270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:29:42.405921  248270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:29:42.407664  248270 out.go:252]   - Generating certificates and keys ...
	I1210 05:29:42.407781  248270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:29:42.407857  248270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:29:42.407979  248270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:29:42.408061  248270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:29:42.408157  248270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:29:42.408230  248270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:29:42.408313  248270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:29:42.408455  248270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408531  248270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:29:42.408656  248270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408732  248270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:29:42.408789  248270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:29:42.408829  248270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:29:42.408896  248270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:29:42.408941  248270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:29:42.409022  248270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:29:42.409106  248270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:29:42.409190  248270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:29:42.409293  248270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:29:42.409407  248270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:29:42.409507  248270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:29:42.411069  248270 out.go:252]   - Booting up control plane ...
	I1210 05:29:42.411187  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:29:42.411282  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:29:42.411391  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:29:42.411501  248270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:29:42.411592  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:29:42.411684  248270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:29:42.411786  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:29:42.411843  248270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:29:42.412015  248270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:29:42.412159  248270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:29:42.412277  248270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203089s
	I1210 05:29:42.412416  248270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:29:42.412493  248270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.227:8443/livez
	I1210 05:29:42.412581  248270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:29:42.412660  248270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:29:42.412738  248270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.103693537s
	I1210 05:29:42.412795  248270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.793585918s
	I1210 05:29:42.412851  248270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001380192s
	I1210 05:29:42.412969  248270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:29:42.413101  248270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:29:42.413187  248270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:29:42.413353  248270 kubeadm.go:319] [mark-control-plane] Marking the node addons-819501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:29:42.413409  248270 kubeadm.go:319] [bootstrap-token] Using token: ifaxfb.g6s3du0ko87s83xe
	I1210 05:29:42.415656  248270 out.go:252]   - Configuring RBAC rules ...
	I1210 05:29:42.415753  248270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:29:42.415838  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:29:42.415978  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:29:42.416146  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:29:42.416292  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:29:42.416410  248270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:29:42.416562  248270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:29:42.416613  248270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:29:42.416653  248270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:29:42.416659  248270 kubeadm.go:319] 
	I1210 05:29:42.416721  248270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:29:42.416730  248270 kubeadm.go:319] 
	I1210 05:29:42.416794  248270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:29:42.416799  248270 kubeadm.go:319] 
	I1210 05:29:42.416820  248270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:29:42.416871  248270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:29:42.416929  248270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:29:42.416935  248270 kubeadm.go:319] 
	I1210 05:29:42.416983  248270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:29:42.416988  248270 kubeadm.go:319] 
	I1210 05:29:42.417031  248270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:29:42.417039  248270 kubeadm.go:319] 
	I1210 05:29:42.417087  248270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:29:42.417155  248270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:29:42.417214  248270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:29:42.417227  248270 kubeadm.go:319] 
	I1210 05:29:42.417300  248270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:29:42.417370  248270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:29:42.417376  248270 kubeadm.go:319] 
	I1210 05:29:42.417457  248270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417548  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 \
	I1210 05:29:42.417568  248270 kubeadm.go:319] 	--control-plane 
	I1210 05:29:42.417576  248270 kubeadm.go:319] 
	I1210 05:29:42.417649  248270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:29:42.417655  248270 kubeadm.go:319] 
	I1210 05:29:42.417725  248270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417846  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 
	I1210 05:29:42.417859  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:42.417870  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:42.419498  248270 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 05:29:42.420865  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 05:29:42.435368  248270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 05:29:42.463419  248270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:29:42.463507  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:42.463555  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-819501 minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-819501 minikube.k8s.io/primary=true
	I1210 05:29:42.517462  248270 ops.go:34] apiserver oom_adj: -16
	I1210 05:29:42.645586  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.145896  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.646263  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.146506  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.646447  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.146404  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.646503  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.146345  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.645679  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.765786  248270 kubeadm.go:1114] duration metric: took 4.302351478s to wait for elevateKubeSystemPrivileges
	I1210 05:29:46.765837  248270 kubeadm.go:403] duration metric: took 16.770933871s to StartCluster
	I1210 05:29:46.765872  248270 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766077  248270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:29:46.766575  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766803  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:29:46.766812  248270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:46.766895  248270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:29:46.767036  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.767055  248270 addons.go:70] Setting yakd=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting cloud-spanner=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting default-storageclass=true in profile "addons-819501"
	I1210 05:29:46.767094  248270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-819501"
	I1210 05:29:46.767101  248270 addons.go:239] Setting addon cloud-spanner=true in "addons-819501"
	I1210 05:29:46.767102  248270 addons.go:239] Setting addon yakd=true in "addons-819501"
	I1210 05:29:46.767110  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-819501"
	I1210 05:29:46.767110  248270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767136  248270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-819501"
	I1210 05:29:46.767140  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:29:46.767153  248270 addons.go:70] Setting gcp-auth=true in profile "addons-819501"
	I1210 05:29:46.767165  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767173  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767110  248270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767198  248270 addons.go:70] Setting inspektor-gadget=true in profile "addons-819501"
	I1210 05:29:46.767209  248270 addons.go:239] Setting addon inspektor-gadget=true in "addons-819501"
	I1210 05:29:46.767208  248270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-819501"
	I1210 05:29:46.767236  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767251  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768008  248270 addons.go:70] Setting metrics-server=true in profile "addons-819501"
	I1210 05:29:46.768032  248270 addons.go:239] Setting addon metrics-server=true in "addons-819501"
	I1210 05:29:46.768064  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767182  248270 addons.go:70] Setting ingress=true in profile "addons-819501"
	I1210 05:29:46.768104  248270 addons.go:239] Setting addon ingress=true in "addons-819501"
	I1210 05:29:46.768148  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767190  248270 addons.go:70] Setting ingress-dns=true in profile "addons-819501"
	I1210 05:29:46.768193  248270 addons.go:239] Setting addon ingress-dns=true in "addons-819501"
	I1210 05:29:46.768232  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768313  248270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-819501"
	I1210 05:29:46.768289  248270 addons.go:70] Setting storage-provisioner=true in profile "addons-819501"
	I1210 05:29:46.768342  248270 addons.go:70] Setting volcano=true in profile "addons-819501"
	I1210 05:29:46.768349  248270 addons.go:239] Setting addon storage-provisioner=true in "addons-819501"
	I1210 05:29:46.768354  248270 addons.go:239] Setting addon volcano=true in "addons-819501"
	I1210 05:29:46.768375  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768380  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767174  248270 mustload.go:66] Loading cluster: addons-819501
	I1210 05:29:46.768847  248270 addons.go:70] Setting registry=true in profile "addons-819501"
	I1210 05:29:46.768871  248270 addons.go:239] Setting addon registry=true in "addons-819501"
	I1210 05:29:46.768917  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769020  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.769085  248270 addons.go:70] Setting registry-creds=true in profile "addons-819501"
	I1210 05:29:46.769091  248270 out.go:179] * Verifying Kubernetes components...
	I1210 05:29:46.769102  248270 addons.go:239] Setting addon registry-creds=true in "addons-819501"
	I1210 05:29:46.769133  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769444  248270 addons.go:70] Setting volumesnapshots=true in profile "addons-819501"
	I1210 05:29:46.769469  248270 addons.go:239] Setting addon volumesnapshots=true in "addons-819501"
	I1210 05:29:46.769500  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768333  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-819501"
	I1210 05:29:46.771211  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:46.775133  248270 addons.go:239] Setting addon default-storageclass=true in "addons-819501"
	I1210 05:29:46.775185  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.775483  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:29:46.775493  248270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:29:46.775702  248270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:29:46.775491  248270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:29:46.776909  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:29:46.776932  248270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:29:46.776946  248270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1210 05:29:46.777456  248270 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:29:46.777012  248270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:46.777568  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:29:46.777745  248270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:29:46.777753  248270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:29:46.777795  248270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:46.777807  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:29:46.778720  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:29:46.778753  248270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:46.779205  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:29:46.778868  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.779621  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.779626  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:29:46.779644  248270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:29:46.779699  248270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:46.779717  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:29:46.779802  248270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:29:46.779812  248270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:46.781021  248270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-819501"
	I1210 05:29:46.781066  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.781599  248270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:46.781619  248270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:29:46.781927  248270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:29:46.781979  248270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:29:46.782811  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:29:46.782848  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:46.783287  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:29:46.782853  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:46.783372  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:29:46.783764  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:29:46.783783  248270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:46.784144  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:29:46.784493  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:29:46.784507  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:29:46.784990  248270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:29:46.785462  248270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:29:46.787159  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.787165  248270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:29:46.787232  248270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:29:46.787240  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:29:46.787412  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:29:46.787542  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787581  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787826  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.788403  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:46.788672  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:29:46.789422  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789691  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789726  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789870  248270 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:29:46.789904  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789986  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.790377  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790436  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790604  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790708  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.790929  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:29:46.791336  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791379  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791545  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.791579  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.791918  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:46.791944  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:29:46.792221  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792333  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792335  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.792373  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792580  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792613  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.793171  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793390  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793737  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:29:46.793814  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.793846  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.794407  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.794632  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795534  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795729  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.795767  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795997  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796033  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796253  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796350  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796260  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:29:46.796383  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796916  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796960  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796989  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797284  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.797288  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.797322  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797369  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797592  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798121  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798164  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798338  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798405  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798934  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798992  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:29:46.799112  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.799331  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.799362  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.799584  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.800274  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:29:46.800293  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:29:46.802692  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803162  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.803185  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803366  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	W1210 05:29:47.001763  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.001799  248270 retry.go:31] will retry after 305.783852ms: ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	W1210 05:29:47.014988  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.015023  248270 retry.go:31] will retry after 221.795568ms: ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.174748  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:47.174750  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
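The bash pipeline above rewrites the CoreDNS ConfigMap in place: it fetches the Corefile, inserts a hosts block mapping host.minikube.internal to the host gateway IP (plus a log directive) ahead of the forward stanza, and pushes the result back with kubectl replace. A rough way to confirm the injected record afterwards, assuming the replace succeeded (illustrative commands, not part of the test run):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected fragment produced by the sed edit above:
    #   hosts {
    #      192.168.50.1 host.minikube.internal
    #      fallthrough
    #   }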
	I1210 05:29:47.407045  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:47.432282  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:29:47.432309  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:29:47.482855  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:47.501562  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:47.503279  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:29:47.503299  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:29:47.509563  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:47.515555  248270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:29:47.515582  248270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:29:47.525606  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:47.562132  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:47.566586  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:47.617239  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:29:47.617273  248270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:29:47.645948  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:29:47.645980  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:29:47.758438  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:47.910234  248270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:47.910257  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:29:47.920337  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:29:47.920367  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:29:48.015067  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:48.027823  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:29:48.027852  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:29:48.067181  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:29:48.067220  248270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:29:48.292618  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:29:48.292654  248270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:29:48.352705  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:48.609223  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:29:48.609250  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:29:48.685554  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:48.755069  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:29:48.755098  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:29:48.842106  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:29:48.842167  248270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:29:48.875769  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:48.875798  248270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:29:49.413506  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:49.413534  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:29:49.466897  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:29:49.466930  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:29:49.622705  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:29:49.622739  248270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:29:49.796096  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:50.219307  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:29:50.219336  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:29:50.219351  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:50.321293  248270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:50.321319  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:29:50.537459  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:29:50.537499  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:29:50.716098  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:51.041250  248270 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.866447078s)
	I1210 05:29:51.041309  248270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.866465642s)
	I1210 05:29:51.041340  248270 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1210 05:29:51.042044  248270 node_ready.go:35] waiting up to 6m0s for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049135  248270 node_ready.go:49] node "addons-819501" is "Ready"
	I1210 05:29:51.049170  248270 node_ready.go:38] duration metric: took 7.101622ms for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049187  248270 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:29:51.049251  248270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:29:51.068361  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:29:51.068386  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:29:51.554068  248270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-819501" context rescaled to 1 replicas
	I1210 05:29:51.613448  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:29:51.613477  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:29:52.107019  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:29:52.107058  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:29:52.549779  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:52.549811  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:29:53.265677  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:54.221671  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:29:54.225457  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.225987  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:54.226021  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.226211  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.099859  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:29:55.502306  248270 addons.go:239] Setting addon gcp-auth=true in "addons-819501"
	I1210 05:29:55.502381  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:55.504619  248270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:29:55.507749  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508396  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:55.508441  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508753  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.727192  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.320101368s)
	I1210 05:29:55.727260  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.244355162s)
	I1210 05:29:55.727286  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.217695624s)
	I1210 05:29:55.727349  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.225763647s)
	I1210 05:29:55.727464  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.201812774s)
	I1210 05:29:55.727514  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.165347037s)
	I1210 05:29:55.727599  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.160988535s)
	I1210 05:29:55.727655  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.969192004s)
	W1210 05:29:55.895228  248270 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1210 05:29:58.186837  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.834088583s)
	I1210 05:29:58.186914  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.501321346s)
	I1210 05:29:58.186956  248270 addons.go:495] Verifying addon registry=true in "addons-819501"
	I1210 05:29:58.187022  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.39087132s)
	I1210 05:29:58.187082  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.96769279s)
	I1210 05:29:58.187047  248270 addons.go:495] Verifying addon metrics-server=true in "addons-819501"
	I1210 05:29:58.187133  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.172020307s)
	I1210 05:29:58.187195  248270 addons.go:495] Verifying addon ingress=true in "addons-819501"
	I1210 05:29:58.188701  248270 out.go:179] * Verifying registry addon...
	I1210 05:29:58.188716  248270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-819501 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:29:58.189735  248270 out.go:179] * Verifying ingress addon...
	I1210 05:29:58.191374  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:29:58.192560  248270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:29:58.348103  248270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:58.348137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.367929  248270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:29:58.367966  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
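The two kapi waits above poll the pods behind each label selector until they leave Pending; roughly the same check can be reproduced by hand (illustrative only, not the helper's internal API call):

    kubectl --context addons-819501 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-819501 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx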
	I1210 05:29:58.542091  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.825930838s)
	I1210 05:29:58.542168  248270 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.492891801s)
	W1210 05:29:58.542182  248270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:58.542204  248270 api_server.go:72] duration metric: took 11.775367493s to wait for apiserver process to appear ...
	I1210 05:29:58.542216  248270 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:29:58.542242  248270 api_server.go:253] Checking apiserver healthz at https://192.168.50.227:8443/healthz ...
	I1210 05:29:58.542243  248270 retry.go:31] will retry after 174.698732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
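The apply fails because the VolumeSnapshotClass object is submitted in the same batch as the CRDs that define its kind, so the API server cannot map the kind before those CRDs are established; the tool therefore schedules a retry (and, further down in the log, re-applies with --force). A common way to sidestep this ordering race when applying such manifests by hand (illustrative only; file names match the ones used above):

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml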
	I1210 05:29:58.565722  248270 api_server.go:279] https://192.168.50.227:8443/healthz returned 200:
	ok
	I1210 05:29:58.585143  248270 api_server.go:141] control plane version: v1.34.3
	I1210 05:29:58.585187  248270 api_server.go:131] duration metric: took 42.962592ms to wait for apiserver health ...
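The healthz probe above amounts to hitting the apiserver endpoint directly (illustrative; -k skips certificate verification, which the tooling normally handles with its own client certs):

    curl -k https://192.168.50.227:8443/healthz
    # -> ok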
	I1210 05:29:58.585201  248270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:29:58.669908  248270 system_pods.go:59] 16 kube-system pods found
	I1210 05:29:58.669957  248270 system_pods.go:61] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.669977  248270 system_pods.go:61] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669984  248270 system_pods.go:61] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669992  248270 system_pods.go:61] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.670000  248270 system_pods.go:61] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.670006  248270 system_pods.go:61] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.670017  248270 system_pods.go:61] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.670027  248270 system_pods.go:61] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.670033  248270 system_pods.go:61] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.670041  248270 system_pods.go:61] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.670051  248270 system_pods.go:61] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.670060  248270 system_pods.go:61] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.670076  248270 system_pods.go:61] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.670084  248270 system_pods.go:61] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.670094  248270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.670101  248270 system_pods.go:61] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.670110  248270 system_pods.go:74] duration metric: took 84.901558ms to wait for pod list to return data ...
	I1210 05:29:58.670120  248270 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:29:58.717796  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:58.755265  248270 default_sa.go:45] found service account: "default"
	I1210 05:29:58.755306  248270 default_sa.go:55] duration metric: took 85.176789ms for default service account to be created ...
	I1210 05:29:58.755322  248270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:29:58.837383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.837387  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.837931  248270 system_pods.go:86] 17 kube-system pods found
	I1210 05:29:58.837967  248270 system_pods.go:89] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.837983  248270 system_pods.go:89] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838000  248270 system_pods.go:89] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838007  248270 system_pods.go:89] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.838018  248270 system_pods.go:89] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.838025  248270 system_pods.go:89] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.838036  248270 system_pods.go:89] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.838043  248270 system_pods.go:89] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.838049  248270 system_pods.go:89] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.838060  248270 system_pods.go:89] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.838075  248270 system_pods.go:89] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.838101  248270 system_pods.go:89] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.838115  248270 system_pods.go:89] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.838123  248270 system_pods.go:89] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.838130  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.838137  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xhmx9" [f17bb4a5-df22-4e1c-a6dd-a37a43712cbb] Pending
	I1210 05:29:58.838143  248270 system_pods.go:89] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.838154  248270 system_pods.go:126] duration metric: took 82.823961ms to wait for k8s-apps to be running ...
	I1210 05:29:58.838177  248270 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:29:58.838240  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:59.216996  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.217048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.760212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.799267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.028379  248270 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.5237158s)
	I1210 05:30:00.030605  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:30:00.031844  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.766107934s)
	I1210 05:30:00.031919  248270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:30:00.033501  248270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:30:00.033501  248270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:30:00.035389  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:30:00.035424  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:30:00.036495  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:30:00.092418  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:30:00.092524  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:30:00.099191  248270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:30:00.099218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.154497  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.154523  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:30:00.218466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.218476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.239458  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.551381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.700051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.700489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.046588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.060958  248270 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.222683048s)
	I1210 05:30:01.060998  248270 system_svc.go:56] duration metric: took 2.222816606s WaitForService to wait for kubelet
	I1210 05:30:01.061010  248270 kubeadm.go:587] duration metric: took 14.294174339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:30:01.061035  248270 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:30:01.060959  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.343106707s)
	I1210 05:30:01.067487  248270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 05:30:01.067520  248270 node_conditions.go:123] node cpu capacity is 2
	I1210 05:30:01.067536  248270 node_conditions.go:105] duration metric: took 6.493768ms to run NodePressure ...
	I1210 05:30:01.067549  248270 start.go:242] waiting for startup goroutines ...
	I1210 05:30:01.200588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.203833  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.575049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.783678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.820709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.901619  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.662111316s)
	I1210 05:30:01.903054  248270 addons.go:495] Verifying addon gcp-auth=true in "addons-819501"
	I1210 05:30:01.905447  248270 out.go:179] * Verifying gcp-auth addon...
	I1210 05:30:01.908030  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:30:01.971590  248270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:30:01.971620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.099231  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.211381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.211475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.423901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.544501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.700413  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.702741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.917043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.043724  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.195997  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.200750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.422696  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.542204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.699053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.702004  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.913811  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.042408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.197038  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.197194  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.414256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.544503  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.696289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.699139  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.912192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.043926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.197317  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.198154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.413841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.542630  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.836234  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.837463  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.912581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.041785  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.197071  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.198021  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.412100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.541405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.696562  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.697300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.912034  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.040758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.195563  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.196426  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.414799  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.541759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.699852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.700171  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.913279  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.199694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.200267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.413210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.541549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.694572  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.700820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.913565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.043130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.199805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.200384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.413468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.738431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.743709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.744006  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.913178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.045294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.201112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.201536  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.412804  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.543961  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.700658  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.702942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.913710  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.042129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.198908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.204061  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.412990  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.540719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.701614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.702763  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.914546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.042555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.197852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.198653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.417360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.542814  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.697802  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.699425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.913723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.040006  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.195864  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.199933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.418096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.543369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.699360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.699489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.912674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.040435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.197368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.198434  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.413640  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.540394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.696663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.699389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.915541  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.429953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.433247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.435508  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.435521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.540749  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.699596  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.700467  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.916459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.041580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.195018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.197575  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.412219  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.546078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.718549  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.718656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.912761  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.049720  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.199564  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.199795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.416037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.544532  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.699384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.700731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.945756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.041320  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.200647  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.200899  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.413830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.546581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.697003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.702274  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.912631  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.043237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.197045  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.412567  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.540612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.695730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.698016  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.913692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.045701  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.197249  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.197641  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.413847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.542656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.698818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.704499  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.918024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.042453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.197201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.203709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.415612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.547180  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.697089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.698183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.913362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.048347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.198596  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:22.414126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.541638  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.695242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.698366  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.021289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.042294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.198290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.199072  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.414047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.554644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.701970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.705112  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.913583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.061821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.199309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.203209  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.418670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.541305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.696346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.700118  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.915306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.053528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.196902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.197440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.413854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.542391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.701623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.704483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.911945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.046126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.212011  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.214603  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.412412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.543055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.696033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.699135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.911772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.040667  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.195375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.197664  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.412483  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.541678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.700619  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.701393  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.912791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.040728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.198661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.201110  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.416573  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.904196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.904398  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.904575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.913235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.045841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.200545  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.203445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.411788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.542883  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.699400  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.701573  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.913680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.040168  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.197412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.202099  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.413962  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.542075  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.697823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.699758  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.933743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.041743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.200789  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.202078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.411894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.543354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.697635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.915844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.042712  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.195680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.196329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.413623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.541402  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.697475  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.700593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.914887  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.043306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.197100  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.199780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.412378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.542047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.696129  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.696893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.912809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.040753  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.195477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.196560  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.417130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.543287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.702104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.702245  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.912016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.040624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.196109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.196529  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.411973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.540615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.697044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.697719  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.913698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.040429  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.195759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.196067  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.413418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.541807  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.698339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.698627  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.912470  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.042035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.195635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.195732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.412529  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.541462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.696355  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.696358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.911987  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.195127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.197401  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.412456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.541008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.696652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.696855  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.912382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.040953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.195628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.197046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.411281  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.541262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.696395  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.696643  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.912042  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.195226  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.196941  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.413324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.540517  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.695280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.697540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.196610  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.197062  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.411344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.541121  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.697442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.697602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.912587  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.040852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.195549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.196962  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.412892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.540835  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.695893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.697068  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.912138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.041313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.197278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.197476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.412390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.541382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.695572  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.696270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.911719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.040923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.199247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.200272  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.412362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.543558  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.695243  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.696776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.912579  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.040311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.196524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.196806  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.412967  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.541112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.695648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.698135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.911238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.040981  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.197106  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.413618  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.541179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.700444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.700579  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.912054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.040909  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.197636  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.540223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.697846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.698230  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.912349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.042055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.198187  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.411783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.542256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.695546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.911939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.041385  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.196163  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.196353  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.412001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.541756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.695783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.696993  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.911727  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.040528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.196370  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.196506  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.412758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.542086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.697092  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.698043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.912949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.041707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.195044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.196530  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.412015  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.540676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.695253  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.697115  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.911838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.040571  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.197767  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.541296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.695780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.698078  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.911868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.040966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.196250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.198224  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.412952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.541033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.695318  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.697975  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.918128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.041725  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.196856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.196973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.412680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.541389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.696707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.697417  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.911941  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.041035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.197936  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.416128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.540974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.696532  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.696605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.912032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.041144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.195844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.197459  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.412239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.543267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.696985  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.697065  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.911959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.041069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.196450  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.197271  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.411484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.543169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.698956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.700761  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.912297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.041754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.198093  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.412335  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.547820  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.697087  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.697216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.041334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.195790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.197287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.412314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.541645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.695025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.696945  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.913032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.042206  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.195905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.196849  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.413416  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.541940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.695570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.697495  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.912083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.040980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.197600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.197750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.413084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.541003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.701147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.701307  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.912239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.041838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.197451  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.199381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.566113  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.697162  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.697404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.912554  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.041300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.197265  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.197744  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.412283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.541313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.697345  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.697362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.912456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.041433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.196053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.196386  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.411480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.541396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.697012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.697122  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.912021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.196425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.197055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.414392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.543906  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.697548  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.699128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.914449  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.059122  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.199035  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.199060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.411555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.548145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.706728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.710127  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.913080  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.045418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.197313  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.198838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.412442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.550194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.698588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.700102  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.912898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.041523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.200048  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.201764  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.416732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.541668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.696805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.699404  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.913919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.044035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.202083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.203196  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.424278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.543568  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.694390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.696701  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.928227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.047635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.202711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:10.203091  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.412177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.547099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.701159  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.701427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.010654  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.047478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.197452  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.197505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.412499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.542966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.699944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.704028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.913615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.040550  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.202167  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.206422  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.414111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.541586  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.701242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.701392  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.911980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.041255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.196819  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:13.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.415365  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.543891  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.698298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.698483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.024018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.044419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.200005  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.200056  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.414780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.545383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.698157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.698240  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.912312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.047766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.200507  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.201630  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.414359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.542858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.697064  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.914308  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.042374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.203003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.205670  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.415017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.547467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.698087  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.704457  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.912729  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.042682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.197758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.199865  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.415143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.543861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.697284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.697374  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.912774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.051952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.198347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.198464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.413101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.544220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.697480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.698347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.912174  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.041195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.195597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.197754  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.411930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.543107  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.911557  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.040046  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.195570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.197384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.411741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.540714  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.696789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.696930  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.912723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.040277  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.196028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.196867  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.412845  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.540858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.695001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.697856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.913055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.041789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.195447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.197793  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.412097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.541100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.695437  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.697741  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.041760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.195078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.196924  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.411992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.540721  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.696368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.697952  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.924611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.042614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.195700  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.197113  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.542539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.696569  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.699116  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.912589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.040194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.197311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.197820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.412984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.541361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.696760  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.699224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.911979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.041540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.195316  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.196448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.412932  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.540888  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.695457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.912839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.040692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.198285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.198389  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.413216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.541054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.695524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.697682  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.912984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.041063  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.196678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.197402  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.412505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.540110  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.695717  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.697736  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.912387  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.041413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.194635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.196805  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.412044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.542788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.695337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.697270  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.911726  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.041798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.195904  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.197264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.411616  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.540508  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.697339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.697782  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.913547  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.040452  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.195922  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.196535  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.411982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.540478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.695278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.698540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.912494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.043856  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.197609  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.197819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.412431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.541752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.696539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.697403  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.912910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.041048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.197917  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.412695  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.540734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.695267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.697595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.912951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.040593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.194646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.197266  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.411742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.540945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.698161  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.698313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.912160  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.042016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.195246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.197425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.412035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.541521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.694583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.697617  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.911895  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.040992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.196447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.197672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.412372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.541192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.697869  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.699654  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.912908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.040956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.196801  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.196942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.411935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.541058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.694918  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.696472  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.912836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.040660  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.194944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.197448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.411791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.541124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.697144  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.697809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.912697  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.040461  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.194656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.196407  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.411925  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.541913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.695201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.696659  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.912467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.040722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.195735  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.196428  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.412510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.540388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.695082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.041847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.197091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.197567  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.412898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.540868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.697534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.697902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.912633  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.040266  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.196150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.198086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.411854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.541196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.697014  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.698518  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.912523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.040145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.195475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.197044  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.412242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.540853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.695064  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.912652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.040413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.195534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.196588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.412338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.541409  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.696199  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.696325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.912351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.041899  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.195069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.411817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.540975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.696734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.696956  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.912435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.041440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.194926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.197727  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.412648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.540969  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.695484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.699909  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.914009  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.043433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.197574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.197992  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.412334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.541535  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.694711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.695578  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.912973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.040846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.195216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.196738  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.412375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.542493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.696138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.696493  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.912933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.041324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.196820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.412830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.542225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.696086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.696301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.911815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.040698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.196781  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.412677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.541012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.695171  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.696197  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.912665  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.040391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.196254  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.196414  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.411787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.546678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.697612  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.697836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.912801  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.041759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.197930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.413220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.541923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.695369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.696907  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.913889  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.040661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.196476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.196983  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.411076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.541834  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.696458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.696646  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.912125  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.041959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.195464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.412425  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.541916  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.698408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.699009  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.912043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.041485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.196212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.196795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.412287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.541805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.695021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.697644  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.912403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.041280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.196215  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.196845  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.540086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.695689  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.698723  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.912185  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.041278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.195718  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.196357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.415350  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.541275  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.695983  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.912626  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.041539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.195051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.196590  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.411806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.696600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.697564  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.912135  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.041032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.196602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.196745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.412270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.542109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.696566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.697555  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.912666  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.040543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.196971  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.411627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.541861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.695413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.913487  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.042233  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.195394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.197595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.412418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.541658  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.697383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.698671  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.914029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.042979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.198534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.198746  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.413488  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.540091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.699039  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.699243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.913260  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.042351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.196797  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.412055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.540813  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.695810  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.696278  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.912607  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.040336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.195923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.197906  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.412339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.697319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.697522  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.911871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.042054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.196538  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.197169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.412220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.541589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.695349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.697593  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.911341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.041432  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.196668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.196868  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.412383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.696264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.696298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.912896  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.041463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.196315  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.196358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.411993  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.540392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.696388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.697104  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.915258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.041599  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.196372  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.197301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.413332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.541386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.700566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.700862  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.912947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.042060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.197176  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.411919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.695999  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.697329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.042718  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.196344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.196356  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.411381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.541528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.696498  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.698344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.912694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.041130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.195351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.197663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.412589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.540341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.697469  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.699618  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.912519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.041597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.195947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.197653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.412598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.540709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.696715  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.928026  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.045349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.197477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.197509  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.412404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.542237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.695946  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.696615  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.911988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.040943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.196098  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.197927  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.413238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.544438  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.696894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.697344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.912008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.040574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.196776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.197552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.411737  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.540831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.698462  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.913052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.041218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.195703  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.198051  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.411991  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.541641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.695803  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.041120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.197033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.197238  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.541354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.695478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.696944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.913147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.042436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.196145  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.196396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.411839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.540354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.696245  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.697787  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.912398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.196069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.199304  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.412956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.544144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.699711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.702208  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.911864  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.043905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.198903  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.199000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.422787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.544150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.700200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.704668  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.917309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.046645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.195931  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.196243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.413120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.546436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.700169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.700244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.914444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.047410  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.200096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.203625  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.417682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.546754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.698047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.703046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.914138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.045377  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.200843  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.201169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.413890  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.543372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.696626  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.697747  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.916852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.043284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.196918  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.412927  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.541136  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.699213  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.701183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.914129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.043092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.321913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.323020  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.413045  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.542232  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.698054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.699672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.914677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.045854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.197868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.197922  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.411543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.550068  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.695076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.699427  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.912378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.042894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.197017  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.199935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.417341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.541748  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.695988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.698216  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.911305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.043682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.203306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.203790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:28.413791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.548187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.705853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.706201  248270 kapi.go:107] duration metric: took 2m30.513642085s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:32:28.911698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.040817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.197936  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.421552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.541741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.696634  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.912225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.041603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:30.195204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.411259  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.541923  248270 kapi.go:107] duration metric: took 2m30.505431248s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:32:30.700458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.916295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.198295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.413624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.701677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.934977  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.199414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.418730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.695325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.913157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.197510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.413102  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.696635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.912976  248270 kapi.go:107] duration metric: took 2m32.004940635s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:32:33.914947  248270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-819501 cluster.
	I1210 05:32:33.916673  248270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:32:33.918159  248270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
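	(Editor's note: the three gcp-auth messages above describe how to opt a pod out of credential mounting. Below is a minimal, hypothetical pod manifest sketching that configuration; the label key `gcp-auth-skip-secret` is taken from the log output, while the label value "true", the pod name, and the image are illustrative assumptions rather than part of this test.)

	# Hypothetical example only: a pod labeled so the gcp-auth addon skips mounting GCP credentials.
	# The label key comes from the minikube output above; the value "true", name, and image are assumed.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo            # assumed name, not part of the test run
	  labels:
	    gcp-auth-skip-secret: "true"     # signals the gcp-auth webhook not to inject credentials
	spec:
	  containers:
	  - name: app
	    image: docker.io/library/busybox:stable
	    command: ["sleep", "3600"]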
	I1210 05:32:34.195642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:34.696627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.196024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.695782  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.696145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.195456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.695683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.240338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.696247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.196496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.695741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.196422  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.196332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.196570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.695829  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.195357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.195089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.697101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.195298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.696306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.697545  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.697106  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.196501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.696838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.196330  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.196426  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.695540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.196892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.696131  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.195514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.695825  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.195964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.696041  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.195223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.695628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.196841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.695740  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.194920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.696783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.195842  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.696514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.196291  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.696032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.195757  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.695913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.196132  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.696525  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.195519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.696676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.196468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.697155  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.196040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.695496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.197092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.696051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.194871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.696031  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.196751  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.697187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.195603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.696317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.196085  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.696248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.196156  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.695296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.196584  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.700018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.195142  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.697083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.195224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.695306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.696058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.195574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.696336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.195623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.698100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.197329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.695675  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.196683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.697097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.195492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.695645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.695427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.196795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.697360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.196642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.696362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.195333  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.695816  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.694821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.195992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.695334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.196484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.697312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.196789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.695382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.197511  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.696029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.195379  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.696478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.196742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.696617  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.196691  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.696332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.197105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.695261  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.696412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.196853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.695141  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.196017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.694870  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.196760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.197084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.694798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.195481  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.696818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.195103  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.695287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.196285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.695441  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.695151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.196592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.695791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.195539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.697526  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.195494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.697258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.195374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.696246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.195360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.696595  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.195386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.696495  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.195396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.696399  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.195926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.695752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.696819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.196463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.695510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.196329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.695117  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.195544  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.696499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.196995  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.695938  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.196716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.696111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.195836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.697347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.196443  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.696024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.196687  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.696411  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.195151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.696081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.196766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.695605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.196288  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.696235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.195823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.695868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.196217  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.695561  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.195933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.696154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.195390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.696250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.196127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.695227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.200317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.197235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.696004  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.196361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.695240  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.196104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.695419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.196319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.694964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.196974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.696337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.195978  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.195269  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.695674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.197294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.695137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.196248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.694996  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.196381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.695404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.695758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.196179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.695177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.696242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.196197  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.695191  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.694979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.194913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.697081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.196158  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.694728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.196611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.697492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.196964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.696515  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.195840  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.695719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.196490  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.696390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.195290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.695663  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.201407  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.695974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.197190  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.695471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.196744  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.696589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.195808  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.696661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.196415  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.695620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.195699  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.696759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.196702  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.695614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.194964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.196210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.196613  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.696165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.198418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.695943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.197398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.695772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.195040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.696314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.195340  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.696359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.196391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.696030  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.195397  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.195580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.696472  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.196255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.195578  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.696577  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.197485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.196297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.196752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.697037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.196530  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.696049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.196414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.196246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.696013  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.197119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.695940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.195722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.695900  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.195405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.694853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.195846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.696065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.196493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.695901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.196143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.695285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.197326  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.694920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.194762  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.696331  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.195592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.696344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.195284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.695602  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.195944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.695625  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.195741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.697910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.196058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.195849  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.697178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.696494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.196796  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.696794  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.196939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.695777  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.196354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.697453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.195090  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.695593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.195988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.699457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.196105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.695271  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.195646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.696815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.195859  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.195152  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.696022  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.196238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.696329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.195730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.696445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.196514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.695588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.197806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.697353  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.196082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.696133  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.196367  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.696099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.695686  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.195500  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.697384  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.196018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.694823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.196076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.695462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.195471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.696297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.196124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.696556  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.196283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.695815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.196641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.697074  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.195733  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.697646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.195935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.696025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.195838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.695200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.196244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.696951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.695262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.196037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.695111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.195459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.696434  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.198620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.697017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.195949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.695731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.195894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.695975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.194970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.695677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.197208  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.695370  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.195901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.196346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.695774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.194982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.697065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.195134  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.694745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.195670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.695951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.196052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.696732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.197195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.696093  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.195202  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.695405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.196119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.696476  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.196403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.695788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.195036  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.696466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.198284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.695289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:58.191968  248270 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1210 05:35:58.192004  248270 kapi.go:107] duration metric: took 6m0.000635436s to wait for kubernetes.io/minikube-addons=registry ...
	W1210 05:35:58.192136  248270 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1210 05:35:58.194007  248270 out.go:179] * Enabled addons: inspektor-gadget, ingress-dns, storage-provisioner, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, yakd, volumesnapshots, ingress, csi-hostpath-driver, gcp-auth
	I1210 05:35:58.195457  248270 addons.go:530] duration metric: took 6m11.428581243s for enable addons: enabled=[inspektor-gadget ingress-dns storage-provisioner amd-gpu-device-plugin cloud-spanner nvidia-device-plugin default-storageclass registry-creds metrics-server yakd volumesnapshots ingress csi-hostpath-driver gcp-auth]
	I1210 05:35:58.195518  248270 start.go:247] waiting for cluster config update ...
	I1210 05:35:58.195551  248270 start.go:256] writing updated cluster config ...
	I1210 05:35:58.195954  248270 ssh_runner.go:195] Run: rm -f paused
	I1210 05:35:58.205700  248270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:58.211367  248270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.216998  248270 pod_ready.go:94] pod "coredns-66bc5c9577-lwtl7" is "Ready"
	I1210 05:35:58.217026  248270 pod_ready.go:86] duration metric: took 5.6329ms for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.219507  248270 pod_ready.go:83] waiting for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.225094  248270 pod_ready.go:94] pod "etcd-addons-819501" is "Ready"
	I1210 05:35:58.225120  248270 pod_ready.go:86] duration metric: took 5.593139ms for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.227244  248270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.231410  248270 pod_ready.go:94] pod "kube-apiserver-addons-819501" is "Ready"
	I1210 05:35:58.231431  248270 pod_ready.go:86] duration metric: took 4.167307ms for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.234812  248270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.610624  248270 pod_ready.go:94] pod "kube-controller-manager-addons-819501" is "Ready"
	I1210 05:35:58.610654  248270 pod_ready.go:86] duration metric: took 375.820379ms for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.811334  248270 pod_ready.go:83] waiting for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.211461  248270 pod_ready.go:94] pod "kube-proxy-ngpzv" is "Ready"
	I1210 05:35:59.211491  248270 pod_ready.go:86] duration metric: took 400.130316ms for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.410382  248270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811154  248270 pod_ready.go:94] pod "kube-scheduler-addons-819501" is "Ready"
	I1210 05:35:59.811187  248270 pod_ready.go:86] duration metric: took 400.778411ms for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811204  248270 pod_ready.go:40] duration metric: took 1.605466877s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:59.859434  248270 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 05:35:59.861511  248270 out.go:179] * Done! kubectl is now configured to use "addons-819501" cluster and "default" namespace by default
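	
	The kapi.go lines above record a simple polling wait: list pods matching the label selector "kubernetes.io/minikube-addons=registry" roughly every 500ms until they are ready, giving up after the 6m0s deadline with "context deadline exceeded". The following is a minimal, self-contained sketch of that pattern using client-go; it is not minikube's actual kapi.go implementation, and names such as waitForRegistryPod and the kubeconfig path are illustrative assumptions only.
	
	```go
	// registry_wait_sketch.go
	//
	// Minimal sketch (NOT minikube's kapi.go) of the polling pattern shown in the
	// log: poll the "kubernetes.io/minikube-addons=registry" pods in kube-system
	// about every 500ms, for up to 6 minutes, and surface transient list errors
	// (such as rate-limiter waits) instead of aborting.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForRegistryPod is a hypothetical helper name; the interval and timeout
	// mirror the cadence and the "took 6m0.000635436s" duration in the log above.
	func waitForRegistryPod(ctx context.Context, cs kubernetes.Interface) error {
		const selector = "kubernetes.io/minikube-addons=registry"
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// Treat list errors as temporary and keep polling, as the log does.
					fmt.Printf("temporary error: getting Pods with label selector %q : %v\n", selector, err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				// Done only once at least one matching pod exists and all are Running.
				return len(pods.Items) > 0, nil
			})
	}
	
	func main() {
		// Kubeconfig path is an assumption for this sketch (defaults to ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForRegistryPod(context.Background(), cs); err != nil {
			fmt.Println("registry pod did not become ready:", err)
		}
	}
	```
	
	In this failure the registry pod never leaves Pending (ImagePullBackOff on the registry image), so a loop like the sketch above simply polls until the 6m0s context deadline expires, which is the timeout reported by the test.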
	
	
	==> CRI-O <==
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.767145329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb4d7d07-84b3-47c7-a1ce-5d2ac28606b4 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.768462236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd271516-d868-4f15-bb41-6401d13cf838 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.769960405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345173769903215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd271516-d868-4f15-bb41-6401d13cf838 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.772165376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7995790-c6f1-4390-a393-c1f6d00f9062 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.772231019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7995790-c6f1-4390-a393-c1f6d00f9062 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.772696253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d8e18ef2c2cf6472469d4043a4222f46726b86bcab7610255b480447adb2be,PodSandboxId:f058cab2480c7d799072694b7c8c0fb6ea0de44d988450241dde72510f242833,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765344747838857624,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qvqbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60762df1-e222-42ac-8625-e7a791ed54fb,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0132a3156c2350c4ca37566650c76d85be934240cfda6814755e1c70c405a7ba,PodSandboxId:647f18d0c2136d2fb9fb40cbe8ba48376cd429a31d9c024c540caf218fd88bad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344669449773465,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vldh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 183ba921-49cb-457b-b3ca-887a4e18611b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d43f082316163c31af50b7606ae33e7b63110d400c78ff0162fbcf8fa78c7f,PodSandboxId:d60bc259de4249c983ddff2c11c39172462904a2e9b46f44eb941a3e1c49db0e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344667539885433,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c4kn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d13dd8e-62cb-446b-a718-e15ada0b80a3,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63c
a56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f80019c567907499b9cb3599a668615e2e961c35a478168a43d2504641b9b5,PodSandboxId:2cd83dfd10414d13b5d47d326812f1b262f302e73931dfb52c405852c454518
a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765344620013071734,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 056ca6ed-0cab-42f7-bffb-24f0785fd003,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c
29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f
44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"
liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c
58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6
bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,Crea
tedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5
619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7995790-c6f1-4390-a393-c1f6d00f9062 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.804935596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad41b3b6-c0c7-4afb-89bc-06287b8bc6f5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.805150042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad41b3b6-c0c7-4afb-89bc-06287b8bc6f5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.806796185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=580a0a03-59b1-488b-b753-cf725a31fc04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.808171098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345173808141753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=580a0a03-59b1-488b-b753-cf725a31fc04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.809139151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba244101-b57c-47c2-9038-9cd4f5750727 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.809199367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba244101-b57c-47c2-9038-9cd4f5750727 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.809658074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d8e18ef2c2cf6472469d4043a4222f46726b86bcab7610255b480447adb2be,PodSandboxId:f058cab2480c7d799072694b7c8c0fb6ea0de44d988450241dde72510f242833,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765344747838857624,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qvqbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60762df1-e222-42ac-8625-e7a791ed54fb,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0132a3156c2350c4ca37566650c76d85be934240cfda6814755e1c70c405a7ba,PodSandboxId:647f18d0c2136d2fb9fb40cbe8ba48376cd429a31d9c024c540caf218fd88bad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344669449773465,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vldh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 183ba921-49cb-457b-b3ca-887a4e18611b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d43f082316163c31af50b7606ae33e7b63110d400c78ff0162fbcf8fa78c7f,PodSandboxId:d60bc259de4249c983ddff2c11c39172462904a2e9b46f44eb941a3e1c49db0e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344667539885433,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c4kn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d13dd8e-62cb-446b-a718-e15ada0b80a3,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63c
a56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f80019c567907499b9cb3599a668615e2e961c35a478168a43d2504641b9b5,PodSandboxId:2cd83dfd10414d13b5d47d326812f1b262f302e73931dfb52c405852c454518
a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765344620013071734,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 056ca6ed-0cab-42f7-bffb-24f0785fd003,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c
29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f
44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"
liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c
58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6
bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,Crea
tedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5
619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba244101-b57c-47c2-9038-9cd4f5750727 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.839831722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0f8a150-e902-4496-b17e-12228b256981 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.840114851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0f8a150-e902-4496-b17e-12228b256981 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.842788392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e944a164-2aab-4309-8cb2-0fc01563e01b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.844670109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345173844641436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e944a164-2aab-4309-8cb2-0fc01563e01b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.845744856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62635d96-fbbc-4702-a972-2b4212e199d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.845959566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62635d96-fbbc-4702-a972-2b4212e199d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.846896903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d8e18ef2c2cf6472469d4043a4222f46726b86bcab7610255b480447adb2be,PodSandboxId:f058cab2480c7d799072694b7c8c0fb6ea0de44d988450241dde72510f242833,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765344747838857624,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qvqbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60762df1-e222-42ac-8625-e7a791ed54fb,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0132a3156c2350c4ca37566650c76d85be934240cfda6814755e1c70c405a7ba,PodSandboxId:647f18d0c2136d2fb9fb40cbe8ba48376cd429a31d9c024c540caf218fd88bad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344669449773465,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vldh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 183ba921-49cb-457b-b3ca-887a4e18611b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d43f082316163c31af50b7606ae33e7b63110d400c78ff0162fbcf8fa78c7f,PodSandboxId:d60bc259de4249c983ddff2c11c39172462904a2e9b46f44eb941a3e1c49db0e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765344667539885433,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c4kn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d13dd8e-62cb-446b-a718-e15ada0b80a3,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63c
a56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f80019c567907499b9cb3599a668615e2e961c35a478168a43d2504641b9b5,PodSandboxId:2cd83dfd10414d13b5d47d326812f1b262f302e73931dfb52c405852c454518
a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765344620013071734,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 056ca6ed-0cab-42f7-bffb-24f0785fd003,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c
29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f
44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"
liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c
58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6
bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,Crea
tedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5
619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62635d96-fbbc-4702-a972-2b4212e199d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.850856107Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9363d1bb-6082-4b4e-90ce-95d0608e4440 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.851154226Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1afbf99ee862a55fbd8eca5936830e400952649744c0962f36841802dc4fe9fc,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-t67b9,Uid:7f40dad2-5164-4575-a745-f97826e47fed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345172820085487,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-t67b9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f40dad2-5164-4575-a745-f97826e47fed,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:39:32.499034915Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c395f7ec367cf8ad632d3ddf3b913c9f417fdef9b3dc045e83ef5536a09954c,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,Uid:f7c4d6ff-87f7-45
e9-a730-2f343d7472fa,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345158693551923,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f7c4d6ff-87f7-45e9-a730-2f343d7472fa,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:39:18.373889906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&PodSandboxMetadata{Name:nginx,Uid:390b2e80-0538-4ebe-ae5c-2e24388c48e0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345025727185076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:37:05.40
4943628Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:479ad0f6-afd3-427d-9618-0e77a36d2f86,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344960797434068,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:36:00.473208652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f058cab2480c7d799072694b7c8c0fb6ea0de44d988450241dde72510f242833,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-qvqbv,Uid:60762df1-e222-42ac-8625-e7a791ed54fb,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344732865624362,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes
.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qvqbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60762df1-e222-42ac-8625-e7a791ed54fb,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:57.562384479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cd83dfd10414d13b5d47d326812f1b262f302e73931dfb52c405852c454518a,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:056ca6ed-0cab-42f7-bffb-24f0785fd003,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595670477912,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 056ca6ed-0cab-42f7-bffb-24f0785fd003,},Annotations:m
ap[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,ku
bernetes.io/config.seen: 2025-12-10T05:29:54.093515822Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-vsz96,Uid:5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595665388121,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:54.666677750Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&PodSandboxMetadata{Name:registry-proxy-25pr7,Uid:945db864-8d9f-4e37-b866-28b9f77d42c3,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595504860969,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 65b944f647,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:54.273097725Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cb22f113efb3b0c813b111e484b9ba307c09bbf3c3a0b67063c79e85e76d20b,Metadata:&PodSandboxMetadata{Name:registry-6b586f9694-lkhvn,Uid:0a8387d7-19c7-49cd-8425-48c60f2e70ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595483407014,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-6b586f9694-lkhvn,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a8387d7-19c7-49cd-8425-48c60f2e70ae,kubernetes.io/minikube-addons: registry,pod-template-hash: 6b586f9694,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:53.770566453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:76ed88a2-563b-4ee6-9a9a-94669a45bd2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344595471609847,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"add
onmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-10T05:29:54.306781577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-xwmk6,Uid:ca338a7b-5d2c-4894-a615-0224cddd49ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344590914749277,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:50.584238948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-lwtl7,Uid:d84b8912-1587-45b3-956c-791ea7ec71c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344588377434877,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:47.992625821Z,kubernete
s.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-ngpzv,Uid:75c58eba-0463-42c9-a9d6-3c579349bd49,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344588140379911,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:29:47.793340995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-819501,Uid:aa13790632d350c6bc30d2faa0b6f981,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575305129363,Labels:map[string]string{compone
nt: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa13790632d350c6bc30d2faa0b6f981,kubernetes.io/config.seen: 2025-12-10T05:29:34.778671541Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-819501,Uid:5a0266aeda1eb6dc0732ac0ca983358e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575293580998,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/confi
g.hash: 5a0266aeda1eb6dc0732ac0ca983358e,kubernetes.io/config.seen: 2025-12-10T05:29:34.778670395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-819501,Uid:9414453fd34af8fe84f77d6b515bc5e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575286925560,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.227:8443,kubernetes.io/config.hash: 9414453fd34af8fe84f77d6b515bc5e6,kubernetes.io/config.seen: 2025-12-10T05:29:34.778669039Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a486
6a34165c6378f9b9,Metadata:&PodSandboxMetadata{Name:etcd-addons-819501,Uid:f2613b60cb2b81953748c1f1f1ecd406,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765344575284934789,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.227:2379,kubernetes.io/config.hash: f2613b60cb2b81953748c1f1f1ecd406,kubernetes.io/config.seen: 2025-12-10T05:29:34.778666461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9363d1bb-6082-4b4e-90ce-95d0608e4440 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.852449868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc85432b-84e8-46e1-8382-2e3318e434e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.852656722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc85432b-84e8-46e1-8382-2e3318e434e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:39:33 addons-819501 crio[812]: time="2025-12-10 05:39:33.853814897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d8e18ef2c2cf6472469d4043a4222f46726b86bcab7610255b480447adb2be,PodSandboxId:f058cab2480c7d799072694b7c8c0fb6ea0de44d988450241dde72510f242833,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765344747838857624,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qvqbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60762df1-e222-42ac-8625-e7a791ed54fb,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f80019c567907499b9cb3599a668615e2e961c35a478168a43d2504641b9b5,PodSandboxId:2cd83dfd10414d13b5d47d326812f1b262f302e73931dfb52c405852c454518a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765344620013071734,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 056ca6ed-0cab-42f7-bffb-24f0785fd003,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:
3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee
0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d
9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":
\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container
.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.contain
er.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_
RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc85432b-84e8-46e1-8382-2e3318e434e0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	25e1deb92b904       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                           2 minutes ago       Running             nginx                     0                   ba29e6b659914       nginx                                       default
	c73d958852375       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   14f0fd53f12b8       busybox                                     default
	35d8e18ef2c2c       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             7 minutes ago       Running             controller                0                   f058cab2480c7       ingress-nginx-controller-85d4c799dd-qvqbv   ingress-nginx
	0132a3156c235       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   8 minutes ago       Exited              patch                     0                   647f18d0c2136       ingress-nginx-admission-patch-6vldh         ingress-nginx
	32d43f0823161       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   8 minutes ago       Exited              create                    0                   d60bc259de424       ingress-nginx-admission-create-2c4kn        ingress-nginx
	1668d0c1d2873       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             8 minutes ago       Running             local-path-provisioner    0                   1e43763de09da       local-path-provisioner-648f6765c9-vsz96     local-path-storage
	82db81abdfb84       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac              8 minutes ago       Running             registry-proxy            0                   a5a64939f7ce8       registry-proxy-25pr7                        kube-system
	57f80019c5679       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   2cd83dfd10414       kube-ingress-dns-minikube                   kube-system
	cb5e7c29a0f38       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             9 minutes ago       Running             storage-provisioner       0                   3380b4d1273be       storage-provisioner                         kube-system
	203b77791ed58       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     9 minutes ago       Running             amd-gpu-device-plugin     0                   9061092924ee0       amd-gpu-device-plugin-xwmk6                 kube-system
	56051fcb51898       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             9 minutes ago       Running             coredns                   0                   5c479d4bba0ca       coredns-66bc5c9577-lwtl7                    kube-system
	6bca39dd5c266       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             9 minutes ago       Running             kube-proxy                0                   1b437dd96d110       kube-proxy-ngpzv                            kube-system
	1326c7547c796       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             9 minutes ago       Running             kube-scheduler            0                   f274a00809863       kube-scheduler-addons-819501                kube-system
	f05e43ec5e70f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             9 minutes ago       Running             etcd                      0                   3310898dd2e7b       etcd-addons-819501                          kube-system
	7c800fe0c31f2       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             9 minutes ago       Running             kube-controller-manager   0                   27b140f7dc29d       kube-controller-manager-addons-819501       kube-system
	633a185de0b3b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             9 minutes ago       Running             kube-apiserver            0                   079499566b48e       kube-apiserver-addons-819501                kube-system
	
	
	==> coredns [56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24] <==
	[INFO] 10.244.0.7:40300 - 25084 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00012199s
	[INFO] 10.244.0.7:34807 - 21333 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000160756s
	[INFO] 10.244.0.7:34807 - 18593 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209267s
	[INFO] 10.244.0.7:34807 - 25313 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000141454s
	[INFO] 10.244.0.7:34807 - 10395 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.002833041s
	[INFO] 10.244.0.7:34807 - 55739 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001833612s
	[INFO] 10.244.0.7:34807 - 43296 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001825041s
	[INFO] 10.244.0.7:34807 - 22845 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000315374s
	[INFO] 10.244.0.7:34807 - 24568 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000091034s
	[INFO] 10.244.0.7:60805 - 33934 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000133685s
	[INFO] 10.244.0.7:60805 - 23067 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000312754s
	[INFO] 10.244.0.7:60805 - 62865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00012618s
	[INFO] 10.244.0.7:60805 - 44347 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000209408s
	[INFO] 10.244.0.7:60805 - 6511 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071589s
	[INFO] 10.244.0.7:60805 - 5491 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071794s
	[INFO] 10.244.0.7:60805 - 49212 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000077945s
	[INFO] 10.244.0.7:60805 - 12417 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000289472s
	[INFO] 10.244.0.7:51490 - 53219 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000139169s
	[INFO] 10.244.0.7:51490 - 18728 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000100243s
	[INFO] 10.244.0.7:51490 - 27330 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000090267s
	[INFO] 10.244.0.7:51490 - 59438 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00005688s
	[INFO] 10.244.0.7:51490 - 39092 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000184991s
	[INFO] 10.244.0.7:51490 - 18498 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000131994s
	[INFO] 10.244.0.7:51490 - 3549 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000085215s
	[INFO] 10.244.0.7:51490 - 41158 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000054107s
	
	
	==> describe nodes <==
	Name:               addons-819501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-819501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-819501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-819501
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-819501
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.227
	  Hostname:    addons-819501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba6b4ebfa05046a9ba182a04e8831219
	  System UUID:                ba6b4ebf-a050-46a9-ba18-2a04e8831219
	  Boot ID:                    216e7b9f-8c01-493d-bad4-cf3938ee1b07
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  default                     hello-world-app-5d498dc89-t67b9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-qvqbv                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m37s
	  kube-system                 amd-gpu-device-plugin-xwmk6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 coredns-66bc5c9577-lwtl7                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m47s
	  kube-system                 etcd-addons-819501                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m53s
	  kube-system                 kube-apiserver-addons-819501                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-addons-819501                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-proxy-ngpzv                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-addons-819501                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 registry-6b586f9694-lkhvn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 registry-proxy-25pr7                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  local-path-storage          helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  local-path-storage          local-path-provisioner-648f6765c9-vsz96                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m44s              kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m53s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m53s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m52s              kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s              kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s              kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m52s              kubelet          Node addons-819501 status is now: NodeReady
	  Normal  RegisteredNode           9m48s              node-controller  Node addons-819501 event: Registered Node addons-819501 in Controller
	
	
	==> dmesg <==
	[Dec10 05:31] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.165646] kauditd_printk_skb: 100 callbacks suppressed
	[Dec10 05:32] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000055] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.804071] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.010431] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.587817] kauditd_printk_skb: 11 callbacks suppressed
	[Dec10 05:36] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.071354] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.011099] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.409106] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.223792] kauditd_printk_skb: 72 callbacks suppressed
	[  +0.701216] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.750190] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.999003] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.624936] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 05:37] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.180251] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.341207] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.855665] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.223501] kauditd_printk_skb: 127 callbacks suppressed
	[Dec10 05:38] kauditd_printk_skb: 15 callbacks suppressed
	[Dec10 05:39] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.888203] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f] <==
	{"level":"info","ts":"2025-12-10T05:30:28.888718Z","caller":"traceutil/trace.go:172","msg":"trace[488677997] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:992; }","duration":"199.560707ms","start":"2025-12-10T05:30:28.689147Z","end":"2025-12-10T05:30:28.888708Z","steps":["trace[488677997] 'agreement among raft nodes before linearized reading'  (duration: 199.462748ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:30:28.888975Z","caller":"traceutil/trace.go:172","msg":"trace[1011571668] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"393.086994ms","start":"2025-12-10T05:30:28.495876Z","end":"2025-12-10T05:30:28.888963Z","steps":["trace[1011571668] 'process raft request'  (duration: 392.642759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:30:28.889069Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T05:30:28.495855Z","time spent":"393.146618ms","remote":"127.0.0.1:46866","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4828,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-7csqc\" mod_revision:669 > success:<request_put:<key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-7csqc\" value_size:4760 >> failure:<request_range:<key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-7csqc\" > >"}
	{"level":"warn","ts":"2025-12-10T05:30:28.890002Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.047857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:30:28.890050Z","caller":"traceutil/trace.go:172","msg":"trace[629805527] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:992; }","duration":"200.098103ms","start":"2025-12-10T05:30:28.689944Z","end":"2025-12-10T05:30:28.890042Z","steps":["trace[629805527] 'agreement among raft nodes before linearized reading'  (duration: 200.027767ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011871Z","caller":"traceutil/trace.go:172","msg":"trace[1685321480] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"232.957389ms","start":"2025-12-10T05:31:13.778902Z","end":"2025-12-10T05:31:14.011860Z","steps":["trace[1685321480] 'process raft request'  (duration: 232.862929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:31:14.011968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.972115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:31:14.011995Z","caller":"traceutil/trace.go:172","msg":"trace[2111277061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"108.022783ms","start":"2025-12-10T05:31:13.903967Z","end":"2025-12-10T05:31:14.011990Z","steps":["trace[2111277061] 'agreement among raft nodes before linearized reading'  (duration: 107.945145ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011799Z","caller":"traceutil/trace.go:172","msg":"trace[1752202248] linearizableReadLoop","detail":"{readStateIndex:1150; appliedIndex:1150; }","duration":"107.780261ms","start":"2025-12-10T05:31:13.903991Z","end":"2025-12-10T05:31:14.011771Z","steps":["trace[1752202248] 'read index received'  (duration: 107.774382ms)","trace[1752202248] 'applied index is now lower than readState.Index'  (duration: 5.187µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:32:25.305922Z","caller":"traceutil/trace.go:172","msg":"trace[1251573471] linearizableReadLoop","detail":"{readStateIndex:1319; appliedIndex:1319; }","duration":"121.87535ms","start":"2025-12-10T05:32:25.184006Z","end":"2025-12-10T05:32:25.305882Z","steps":["trace[1251573471] 'read index received'  (duration: 121.869174ms)","trace[1251573471] 'applied index is now lower than readState.Index'  (duration: 5.013µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:32:25.306150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.098039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306183Z","caller":"traceutil/trace.go:172","msg":"trace[135230896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.173364ms","start":"2025-12-10T05:32:25.184001Z","end":"2025-12-10T05:32:25.306174Z","steps":["trace[135230896] 'agreement among raft nodes before linearized reading'  (duration: 122.073487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306204Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.992998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306322Z","caller":"traceutil/trace.go:172","msg":"trace[181149218] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.021433ms","start":"2025-12-10T05:32:25.184200Z","end":"2025-12-10T05:32:25.306222Z","steps":["trace[181149218] 'agreement among raft nodes before linearized reading'  (duration: 121.979708ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:25.306014Z","caller":"traceutil/trace.go:172","msg":"trace[1094354552] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"172.845984ms","start":"2025-12-10T05:32:25.133156Z","end":"2025-12-10T05:32:25.306002Z","steps":["trace[1094354552] 'process raft request'  (duration: 172.745468ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.446698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306494Z","caller":"traceutil/trace.go:172","msg":"trace[180843504] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1269; }","duration":"112.467638ms","start":"2025-12-10T05:32:25.194022Z","end":"2025-12-10T05:32:25.306490Z","steps":["trace[180843504] 'agreement among raft nodes before linearized reading'  (duration: 112.436939ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:38.226558Z","caller":"traceutil/trace.go:172","msg":"trace[587922016] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"227.161495ms","start":"2025-12-10T05:32:37.999377Z","end":"2025-12-10T05:32:38.226539Z","steps":["trace[587922016] 'process raft request'  (duration: 226.986721ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.519985Z","caller":"traceutil/trace.go:172","msg":"trace[1445196895] linearizableReadLoop","detail":"{readStateIndex:1981; appliedIndex:1981; }","duration":"234.877958ms","start":"2025-12-10T05:36:40.285064Z","end":"2025-12-10T05:36:40.519942Z","steps":["trace[1445196895] 'read index received'  (duration: 234.872722ms)","trace[1445196895] 'applied index is now lower than readState.Index'  (duration: 4.502µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:36:40.520340Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.154055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520246Z","caller":"traceutil/trace.go:172","msg":"trace[297048297] transaction","detail":"{read_only:false; response_revision:1873; number_of_response:1; }","duration":"262.670866ms","start":"2025-12-10T05:36:40.257562Z","end":"2025-12-10T05:36:40.520233Z","steps":["trace[297048297] 'process raft request'  (duration: 262.516618ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.520381Z","caller":"traceutil/trace.go:172","msg":"trace[281380984] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1872; }","duration":"235.311996ms","start":"2025-12-10T05:36:40.285059Z","end":"2025-12-10T05:36:40.520371Z","steps":["trace[281380984] 'agreement among raft nodes before linearized reading'  (duration: 235.122733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:36:40.520595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.849704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520613Z","caller":"traceutil/trace.go:172","msg":"trace[1952149145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1873; }","duration":"150.87196ms","start":"2025-12-10T05:36:40.369736Z","end":"2025-12-10T05:36:40.520608Z","steps":["trace[1952149145] 'agreement among raft nodes before linearized reading'  (duration: 150.835739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:55.773886Z","caller":"traceutil/trace.go:172","msg":"trace[380006586] transaction","detail":"{read_only:false; response_revision:1936; number_of_response:1; }","duration":"170.469213ms","start":"2025-12-10T05:36:55.603390Z","end":"2025-12-10T05:36:55.773859Z","steps":["trace[380006586] 'process raft request'  (duration: 169.093512ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:39:34 up 10 min,  0 users,  load average: 0.46, 0.78, 0.60
	Linux addons-819501 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a] <==
	W1210 05:30:25.400790       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:30:25.400807       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:30:25.402016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 05:30:25.766164       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:36:09.659812       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43450: use of closed network connection
	E1210 05:36:09.874699       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43486: use of closed network connection
	I1210 05:36:19.186903       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.102.154"}
	I1210 05:36:26.827800       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1210 05:37:05.234793       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:37:05.460234       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.93.167"}
	I1210 05:37:20.960481       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 05:37:37.370348       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.370420       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.400032       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.400095       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.440464       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.440520       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.462791       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.462856       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.487024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.487088       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1210 05:37:38.441153       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1210 05:37:38.487618       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1210 05:37:38.518385       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1210 05:39:32.596673       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.118.237"}
	
	
	==> kube-controller-manager [7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948] <==
	E1210 05:37:46.847339       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:37:46.848446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 05:37:46.892044       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:37:46.892090       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:37:54.367233       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:37:54.368226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:37:55.014164       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:37:55.015511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:37:55.569382       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:37:55.570442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 05:38:01.527872       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	E1210 05:38:11.341678       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:11.342892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:14.574994       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:14.576079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:19.595205       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:19.596663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:45.549579       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:45.552047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:49.253227       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:49.254692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:02.992372       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:02.993613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:28.832479       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:28.833625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68] <==
	I1210 05:29:49.296578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:29:49.398048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:29:49.398087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.227"]
	E1210 05:29:49.398156       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:29:49.810599       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:29:49.810670       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:29:49.810702       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:29:49.903119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:29:49.904430       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:29:49.904446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:49.914988       1 config.go:200] "Starting service config controller"
	I1210 05:29:49.915001       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:29:49.915020       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:29:49.915023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:29:49.915033       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:29:49.915036       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:29:49.917483       1 config.go:309] "Starting node config controller"
	I1210 05:29:49.917744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:29:49.917814       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:29:50.016054       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:29:50.016117       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:29:50.016149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9] <==
	I1210 05:29:39.754860       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:39.763116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.763203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.764832       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 05:29:39.765071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 05:29:39.765817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:29:39.769470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:29:39.772002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:39.772200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:39.772466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:29:39.772723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:29:39.773020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:39.773348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:29:39.773463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:29:39.773482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:29:39.773493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:29:39.777582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:39.777709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:39.777771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:29:39.778948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:29:39.779052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:39.779103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:29:39.779170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:39.779217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1210 05:29:41.063910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:38:46 addons-819501 kubelet[2254]: I1210 05:38:46.997031    2254 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ac05eea5-fa7f-49aa-8eeb-5eec461a36d5-data\") on node \"addons-819501\" DevicePath \"\""
	Dec 10 05:38:46 addons-819501 kubelet[2254]: I1210 05:38:46.997040    2254 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ac05eea5-fa7f-49aa-8eeb-5eec461a36d5-script\") on node \"addons-819501\" DevicePath \"\""
	Dec 10 05:38:47 addons-819501 kubelet[2254]: I1210 05:38:47.856191    2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac05eea5-fa7f-49aa-8eeb-5eec461a36d5" path="/var/lib/kubelet/pods/ac05eea5-fa7f-49aa-8eeb-5eec461a36d5/volumes"
	Dec 10 05:38:48 addons-819501 kubelet[2254]: I1210 05:38:48.850241    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:38:48 addons-819501 kubelet[2254]: E1210 05:38:48.852175    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:38:52 addons-819501 kubelet[2254]: E1210 05:38:52.241147    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345132240750128  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:38:52 addons-819501 kubelet[2254]: E1210 05:38:52.241169    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345132240750128  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:02 addons-819501 kubelet[2254]: E1210 05:39:02.245845    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345142245347189  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:02 addons-819501 kubelet[2254]: E1210 05:39:02.245891    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345142245347189  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:02 addons-819501 kubelet[2254]: I1210 05:39:02.850159    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xwmk6" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:39:03 addons-819501 kubelet[2254]: I1210 05:39:03.850157    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:39:03 addons-819501 kubelet[2254]: E1210 05:39:03.853464    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:39:12 addons-819501 kubelet[2254]: E1210 05:39:12.249933    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345152249472572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:12 addons-819501 kubelet[2254]: E1210 05:39:12.249978    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345152249472572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:16 addons-819501 kubelet[2254]: I1210 05:39:16.850788    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:39:18 addons-819501 kubelet[2254]: I1210 05:39:18.444843    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnr8v\" (UniqueName: \"kubernetes.io/projected/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-kube-api-access-rnr8v\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	Dec 10 05:39:18 addons-819501 kubelet[2254]: I1210 05:39:18.444916    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-script\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	Dec 10 05:39:18 addons-819501 kubelet[2254]: I1210 05:39:18.444939    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-data\") pod \"helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") " pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042"
	Dec 10 05:39:20 addons-819501 kubelet[2254]: I1210 05:39:20.850579    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-25pr7" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:39:22 addons-819501 kubelet[2254]: E1210 05:39:22.252892    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345162252064208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:22 addons-819501 kubelet[2254]: E1210 05:39:22.252918    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345162252064208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:24 addons-819501 kubelet[2254]: I1210 05:39:24.850811    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:39:32 addons-819501 kubelet[2254]: E1210 05:39:32.256049    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345172255631000  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:32 addons-819501 kubelet[2254]: E1210 05:39:32.256076    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345172255631000  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:39:32 addons-819501 kubelet[2254]: I1210 05:39:32.658881    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbf7r\" (UniqueName: \"kubernetes.io/projected/7f40dad2-5164-4575-a745-f97826e47fed-kube-api-access-bbf7r\") pod \"hello-world-app-5d498dc89-t67b9\" (UID: \"7f40dad2-5164-4575-a745-f97826e47fed\") " pod="default/hello-world-app-5d498dc89-t67b9"
	
	
	==> storage-provisioner [cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320] <==
	W1210 05:39:09.947193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:11.951018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:11.957446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:13.961558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:13.967775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:15.972086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:15.977836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:17.981928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:17.990564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:19.994667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:20.000916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:22.004680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:22.010671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:24.014496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:24.020767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:26.025083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:26.035192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:28.039818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:28.046542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:30.050674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:30.056561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:32.060407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:32.065824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:34.074339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:39:34.083926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
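The kubelet entries in the dump above show the immediate cause of the registry pod staying in ImagePullBackOff: Docker Hub's unauthenticated pull rate limit (toomanyrequests) while fetching docker.io/registry:3.0.0. The kube-proxy log in the same dump also shows the node falling back to IPv4 single-stack because the ip6tables nat table is not available. The commands below are a minimal triage sketch against this report's profile name (addons-819501), not something the test suite runs; the modprobe and image side-loading steps are assumed workarounds, and pulling by tag stands in for the digest-pinned image seen in the log.

    # Check whether the ip6tables nat table exists in the minikube VM; try loading the module if not (assumed workaround)
    minikube -p addons-819501 ssh -- 'sudo ip6tables -t nat -L -n || sudo modprobe ip6table_nat'

    # Avoid the Docker Hub rate limit by pulling with host credentials, then side-loading the image into the cluster
    docker login
    docker pull docker.io/registry:3.0.0
    minikube -p addons-819501 image load docker.io/registry:3.0.0
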
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819501 -n addons-819501
helpers_test.go:270: (dbg) Run:  kubectl --context addons-819501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-t67b9 test-local-path ingress-nginx-admission-create-2c4kn ingress-nginx-admission-patch-6vldh registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path ingress-nginx-admission-create-2c4kn ingress-nginx-admission-patch-6vldh registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path ingress-nginx-admission-create-2c4kn ingress-nginx-admission-patch-6vldh registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042: exit status 1 (87.631566ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-t67b9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-819501/192.168.50.227
	Start Time:       Wed, 10 Dec 2025 05:39:32 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbf7r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bbf7r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-t67b9 to addons-819501
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4jq9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-v4jq9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2c4kn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6vldh" not found
	Error from server (NotFound): pods "registry-6b586f9694-lkhvn" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path ingress-nginx-admission-create-2c4kn ingress-nginx-admission-patch-6vldh registry-6b586f9694-lkhvn helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable ingress-dns --alsologtostderr -v=1: (1.160552515s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable ingress --alsologtostderr -v=1: (7.906506661s)
--- FAIL: TestAddons/parallel/Ingress (159.27s)
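Because the post-mortem describe above raced with pods that had already been cleaned up (the NotFound errors in stderr), a cluster-wide snapshot taken before the addons are disabled tends to be more informative. This is only an illustrative sketch reusing the kubectl context from this report, not a step the suite performs:

    # Pods that are not Running, across all namespaces
    kubectl --context addons-819501 get pods -A --field-selector=status.phase!=Running

    # Most recent warning events (failed pulls, scheduling, admission webhooks)
    kubectl --context addons-819501 get events -A --field-selector type=Warning --sort-by=.lastTimestamp | tail -n 20
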

                                                
                                    
x
+
TestAddons/parallel/LocalPath (302.65s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-819501 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-819501 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the same "kubectl --context addons-819501 get pvc test-pvc -o jsonpath={.status.phase} -n default" check was repeated 72 more times while the test polled the PVC phase; identical lines elided ...]
addons_test.go:962: failed waiting for PVC test-pvc: context deadline exceeded
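The elided loop above is the test polling `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound, which never happened here. A minimal shell sketch of the same check, with an extra `describe` to surface the provisioner events that usually explain a Pending claim (the interval and iteration count below are illustrative, not the test's actual values):

    # poll the PVC phase; if it never binds, dump the claim's events
    for i in $(seq 1 60); do
      phase=$(kubectl --context addons-819501 get pvc test-pvc -n default -o jsonpath='{.status.phase}')
      [ "$phase" = "Bound" ] && break
      sleep 5
    done
    kubectl --context addons-819501 describe pvc test-pvc -n default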
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-819501 -n addons-819501
helpers_test.go:253: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 logs -n 25: (1.276701214s)
helpers_test.go:261: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ -p download-only-140393 │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-829998 │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-160810 │ download-only-160810 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ --download-only -p binary-mirror-177372 --alsologtostderr --binary-mirror http://127.0.0.1:39073 --driver=kvm2  --container-runtime=crio │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ │
	│ delete  │ -p binary-mirror-177372 │ binary-mirror-177372 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ addons  │ disable dashboard -p addons-819501 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ │
	│ addons  │ enable dashboard -p addons-819501 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ │
	│ start   │ -p addons-819501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:35 UTC │
	│ addons  │ addons-819501 addons disable volcano --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:35 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable gcp-auth --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ enable headlamp -p addons-819501 --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable metrics-server --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable headlamp --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:36 UTC │
	│ addons  │ addons-819501 addons disable yakd --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:36 UTC │ 10 Dec 25 05:37 UTC │
	│ ssh     │ addons-819501 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ │
	│ addons  │ addons-819501 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-819501 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ addons  │ addons-819501 addons disable registry-creds --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:37 UTC │ 10 Dec 25 05:37 UTC │
	│ ip      │ addons-819501 ip │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ addons  │ addons-819501 addons disable ingress-dns --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ addons  │ addons-819501 addons disable ingress --alsologtostderr -v=1 │ addons-819501 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:52.105088  248270 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:52.105179  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105184  248270 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:52.105188  248270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:52.105358  248270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:28:52.105849  248270 out.go:368] Setting JSON to false
	I1210 05:28:52.106664  248270 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25879,"bootTime":1765318653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:52.106723  248270 start.go:143] virtualization: kvm guest
	I1210 05:28:52.108609  248270 out.go:179] * [addons-819501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:52.110355  248270 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:28:52.110393  248270 notify.go:221] Checking for updates...
	I1210 05:28:52.112643  248270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:52.114625  248270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:52.115949  248270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.117122  248270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:28:52.118420  248270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:28:52.119836  248270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:52.150913  248270 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 05:28:52.152295  248270 start.go:309] selected driver: kvm2
	I1210 05:28:52.152312  248270 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:28:52.152325  248270 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:28:52.153083  248270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:52.153343  248270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:28:52.153369  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:28:52.153432  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:28:52.153449  248270 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:52.153504  248270 start.go:353] cluster config:
	{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:28:52.153618  248270 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.155323  248270 out.go:179] * Starting "addons-819501" primary control-plane node in "addons-819501" cluster
	I1210 05:28:52.156436  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 05:28:52.175813  248270 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 05:28:52.189575  248270 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 05:28:52.189954  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
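Both preload mirrors above returned 404, so minikube falls back to downloading kubeadm directly and validating it against the published SHA-256 file referenced in the checksum URL. Done by hand, that fallback amounts to roughly the following (output filename is illustrative):

    curl -fsSLo kubeadm https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm
    # compare the published digest with the local file
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum --check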
	I1210 05:28:52.189997  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:28:52.190028  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json: {Name:mk888fb8e14ee6a18b9f0bd32a9670b388cb1bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:28:52.190232  248270 start.go:360] acquireMachinesLock for addons-819501: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 05:28:52.190306  248270 start.go:364] duration metric: took 53.257µs to acquireMachinesLock for "addons-819501"
	I1210 05:28:52.190335  248270 start.go:93] Provisioning new machine with config: &{Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
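The provisioning config above is essentially the cluster config that was just persisted to the profile's config.json (the WriteFile path a few lines earlier); it can be inspected later with any JSON pretty-printer, for example:

    python3 -m json.tool /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json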
	I1210 05:28:52.190423  248270 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 05:28:52.192350  248270 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1210 05:28:52.192578  248270 start.go:159] libmachine.API.Create for "addons-819501" (driver="kvm2")
	I1210 05:28:52.192614  248270 client.go:173] LocalClient.Create starting
	I1210 05:28:52.192740  248270 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem
	I1210 05:28:52.332984  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.335651  248270 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem
	I1210 05:28:52.470748  248270 main.go:143] libmachine: creating domain...
	I1210 05:28:52.470771  248270 main.go:143] libmachine: creating network...
	I1210 05:28:52.472517  248270 main.go:143] libmachine: found existing default network
	I1210 05:28:52.472684  248270 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.473207  248270 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:1f:09} reservation:<nil>}
	I1210 05:28:52.473578  248270 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cefc30}
	I1210 05:28:52.473663  248270 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-819501</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 05:28:52.479464  248270 main.go:143] libmachine: creating private network mk-addons-819501 192.168.50.0/24...
	I1210 05:28:52.482923  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:28:52.558222  248270 main.go:143] libmachine: private network mk-addons-819501 192.168.50.0/24 created
	I1210 05:28:52.558553  248270 main.go:143] libmachine: <network>
	  <name>mk-addons-819501</name>
	  <uuid>c2bdce80-7332-4fd7-b021-02079a969afe</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:ec:cf:14'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
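minikube picked the free 192.168.50.0/24 subnet and created the dedicated NAT network dumped above. A rough manual equivalent with virsh, assuming the <network> definition is saved as mk-addons-819501.xml, would be:

    virsh net-define mk-addons-819501.xml    # register the network definition
    virsh net-start mk-addons-819501         # bring up the virbr bridge (libvirt serves the <dhcp> range)
    virsh net-dhcp-leases mk-addons-819501   # later: inspect leases handed out on this network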
	
	I1210 05:28:52.558584  248270 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.558604  248270 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:28:52.558615  248270 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.558685  248270 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22094-243461/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 05:28:52.641062  248270 cache.go:107] acquiring lock: {Name:mk4f601fcccaa8421d9a471640a96feb5df57ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641095  248270 cache.go:107] acquiring lock: {Name:mka12e8a345a6dc24c0da40f31d69a169b73fc8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641054  248270 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641134  248270 cache.go:107] acquiring lock: {Name:mk72740fe8a4d4eb6e3ad18d28ff308f87f86eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641137  248270 cache.go:107] acquiring lock: {Name:mkc558d20fc07b350030510216ebcf1d2df4b57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641148  248270 cache.go:107] acquiring lock: {Name:mkca46313d0e39171add494fd1f96b98422fb511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641203  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 05:28:52.641216  248270 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 161.132µs
	I1210 05:28:52.641226  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:28:52.641234  248270 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 05:28:52.641227  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 05:28:52.641058  248270 cache.go:107] acquiring lock: {Name:mkc561f0208895e5efe372932a5a00136ddcb2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641242  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 05:28:52.641248  248270 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 118.214µs
	I1210 05:28:52.641254  248270 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 166.04µs
	I1210 05:28:52.641260  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 05:28:52.641263  248270 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 05:28:52.641265  248270 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 05:28:52.641279  248270 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 163.887µs
	I1210 05:28:52.641287  248270 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 246.182µs
	I1210 05:28:52.641301  248270 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 05:28:52.641294  248270 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 05:28:52.641293  248270 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 160.457µs
	I1210 05:28:52.641270  248270 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:52.641237  248270 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 216.984µs
	I1210 05:28:52.641445  248270 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:28:52.641457  248270 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 230.713µs
	I1210 05:28:52.641467  248270 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:28:52.641313  248270 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 05:28:52.641463  248270 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:28:52.641504  248270 cache.go:87] Successfully saved all images to host disk.
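Each cache lookup above completed in microseconds because the image tarballs were already present on the host. They live under the paths shown in the cache lines and can be listed directly, e.g.:

    ls /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/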
	I1210 05:28:52.824390  248270 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa...
	I1210 05:28:52.868316  248270 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk...
	I1210 05:28:52.868386  248270 main.go:143] libmachine: Writing magic tar header
	I1210 05:28:52.868426  248270 main.go:143] libmachine: Writing SSH key tar header
	I1210 05:28:52.868507  248270 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 ...
	I1210 05:28:52.868575  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501
	I1210 05:28:52.868607  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501 (perms=drwx------)
	I1210 05:28:52.868619  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines
	I1210 05:28:52.868633  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines (perms=drwxr-xr-x)
	I1210 05:28:52.868644  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:52.868656  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube (perms=drwxr-xr-x)
	I1210 05:28:52.868664  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461
	I1210 05:28:52.868674  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461 (perms=drwxrwxr-x)
	I1210 05:28:52.868688  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 05:28:52.868698  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 05:28:52.868706  248270 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 05:28:52.868716  248270 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 05:28:52.868725  248270 main.go:143] libmachine: checking permissions on dir: /home
	I1210 05:28:52.868733  248270 main.go:143] libmachine: skipping /home - not owner
	I1210 05:28:52.868738  248270 main.go:143] libmachine: defining domain...
	I1210 05:28:52.870229  248270 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:28:52.875406  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:bd:43:bd in network default
	I1210 05:28:52.876104  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:52.876118  248270 main.go:143] libmachine: starting domain...
	I1210 05:28:52.876122  248270 main.go:143] libmachine: ensuring networks are active...
	I1210 05:28:52.877154  248270 main.go:143] libmachine: Ensuring network default is active
	I1210 05:28:52.877579  248270 main.go:143] libmachine: Ensuring network mk-addons-819501 is active
	I1210 05:28:52.878149  248270 main.go:143] libmachine: getting domain XML...
	I1210 05:28:52.879236  248270 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-819501</name>
	  <uuid>ba6b4ebf-a050-46a9-ba18-2a04e8831219</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/addons-819501.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:0b:26:32'/>
	      <source network='mk-addons-819501'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:43:bd'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1210 05:28:54.168962  248270 main.go:143] libmachine: waiting for domain to start...
	I1210 05:28:54.170546  248270 main.go:143] libmachine: domain is now running
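The second domain XML above is what libvirt reports back once the definition has been registered and started; libvirt fills in the UUID, emulator path, and PCI addresses that were absent from the XML minikube submitted. A rough manual equivalent, assuming the submitted definition were saved as addons-819501.xml:

    virsh define addons-819501.xml   # persist the domain definition
    virsh start addons-819501        # boot it from the attached boot2docker ISO and raw disk
    virsh dumpxml addons-819501      # prints the expanded XML, as logged above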
	I1210 05:28:54.170570  248270 main.go:143] libmachine: waiting for IP...
	I1210 05:28:54.171414  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.172058  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.172073  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.172400  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.172444  248270 retry.go:31] will retry after 204.150227ms: waiting for domain to come up
	I1210 05:28:54.378048  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.378807  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.378824  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.379142  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.379187  248270 retry.go:31] will retry after 336.586353ms: waiting for domain to come up
	I1210 05:28:54.717782  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:54.718612  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:54.718630  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:54.719044  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:54.719085  248270 retry.go:31] will retry after 427.236784ms: waiting for domain to come up
	I1210 05:28:55.147903  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.148695  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.148717  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.149130  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.149170  248270 retry.go:31] will retry after 496.970231ms: waiting for domain to come up
	I1210 05:28:55.648236  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:55.648976  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:55.648993  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:55.649385  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:55.649419  248270 retry.go:31] will retry after 685.299323ms: waiting for domain to come up
	I1210 05:28:56.336314  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:56.336946  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:56.336962  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:56.337319  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:56.337366  248270 retry.go:31] will retry after 806.287256ms: waiting for domain to come up
	I1210 05:28:57.145591  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.146271  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.146294  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.146653  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.146702  248270 retry.go:31] will retry after 821.107194ms: waiting for domain to come up
	I1210 05:28:57.969805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:57.970505  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:57.970524  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:57.970852  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:57.970909  248270 retry.go:31] will retry after 1.109916147s: waiting for domain to come up
	I1210 05:28:59.082244  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:28:59.082858  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:28:59.082893  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:28:59.083281  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:28:59.083325  248270 retry.go:31] will retry after 1.728427418s: waiting for domain to come up
	I1210 05:29:00.814529  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:00.815344  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:00.815363  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:00.815773  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:00.815820  248270 retry.go:31] will retry after 1.517793987s: waiting for domain to come up
	I1210 05:29:02.335622  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:02.336400  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:02.336422  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:02.336895  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:02.336945  248270 retry.go:31] will retry after 2.6142192s: waiting for domain to come up
	I1210 05:29:04.954635  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:04.955354  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:04.955379  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:04.955714  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:04.955755  248270 retry.go:31] will retry after 2.739648926s: waiting for domain to come up
	I1210 05:29:07.696760  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:07.697527  248270 main.go:143] libmachine: no network interface addresses found for domain addons-819501 (source=lease)
	I1210 05:29:07.697545  248270 main.go:143] libmachine: trying to list again with source=arp
	I1210 05:29:07.697920  248270 main.go:143] libmachine: unable to find current IP address of domain addons-819501 in network mk-addons-819501 (interfaces detected: [])
	I1210 05:29:07.697964  248270 retry.go:31] will retry after 2.936432251s: waiting for domain to come up
	I1210 05:29:10.638105  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638831  248270 main.go:143] libmachine: domain addons-819501 has current primary IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.638865  248270 main.go:143] libmachine: found domain IP: 192.168.50.227
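	For reference, the "will retry after Xms: waiting for domain to come up" lines above record a poll loop whose delay grows with jitter until the domain's IP appears. Below is a minimal Go sketch of that pattern, illustrative only (it is not minikube's actual retry.go; the poll function, starting delay, cap and multiplier are assumptions):

	// backoff_sketch.go: illustrative retry-with-growing-delay loop, assuming a
	// caller-supplied lookup function; not minikube's real implementation.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForDomainIP polls lookup until it returns an IP or the deadline passes,
	// sleeping a little longer (with jitter) after each failed attempt.
	func waitForDomainIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 500 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			// Add up to 50% jitter and grow the delay, capped at 3s, similar to
			// the increasing delays seen in the log above.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if wait < 3*time.Second {
				wait = wait * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForDomainIP(func() (string, error) {
			attempts++ // stand-in for querying the libvirt lease/ARP tables
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.50.227", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}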
	I1210 05:29:10.638889  248270 main.go:143] libmachine: reserving static IP address...
	I1210 05:29:10.639331  248270 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-819501", mac: "52:54:00:0b:26:32", ip: "192.168.50.227"} in network mk-addons-819501
	I1210 05:29:10.837647  248270 main.go:143] libmachine: reserved static IP address 192.168.50.227 for domain addons-819501
	I1210 05:29:10.837674  248270 main.go:143] libmachine: waiting for SSH...
	I1210 05:29:10.837683  248270 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 05:29:10.841998  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.842734  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.842776  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.843052  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.843817  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.843851  248270 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 05:29:10.953140  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
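	The WaitForSSH step above simply dials the guest and runs "exit 0" until the command succeeds. A minimal Go sketch of that probe follows; it is illustrative only (the real run authenticates with the machine's id_rsa key, whereas the password here is a placeholder, and host key checking is disabled purely for brevity):

	// ssh_wait_sketch.go: dial the guest and run "exit 0" until it succeeds.
	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					runErr := sess.Run("exit 0") // same no-op command the log shows
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil // guest is accepting SSH commands
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for SSH on %s", addr)
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // assumption: real flow uses the machine's id_rsa key
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		fmt.Println(waitForSSH("192.168.50.227:22", cfg, 2*time.Minute))
	}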
	I1210 05:29:10.953577  248270 main.go:143] libmachine: domain creation complete
	I1210 05:29:10.955178  248270 machine.go:94] provisionDockerMachine start ...
	I1210 05:29:10.957672  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958111  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:10.958134  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:10.958334  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:10.958541  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:10.958552  248270 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:29:11.063469  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 05:29:11.063510  248270 buildroot.go:166] provisioning hostname "addons-819501"
	I1210 05:29:11.066851  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067359  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.067386  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.067581  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.067818  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.067836  248270 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-819501 && echo "addons-819501" | sudo tee /etc/hostname
	I1210 05:29:11.191238  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-819501
	
	I1210 05:29:11.194283  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194634  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.194662  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.194813  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.195030  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.195045  248270 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-819501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-819501/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-819501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:29:11.310332  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:29:11.310364  248270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 05:29:11.310433  248270 buildroot.go:174] setting up certificates
	I1210 05:29:11.310448  248270 provision.go:84] configureAuth start
	I1210 05:29:11.314015  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.314505  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.314528  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317045  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317504  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.317533  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.317778  248270 provision.go:143] copyHostCerts
	I1210 05:29:11.317897  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 05:29:11.318079  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 05:29:11.318163  248270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 05:29:11.318221  248270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.addons-819501 san=[127.0.0.1 192.168.50.227 addons-819501 localhost minikube]
	I1210 05:29:11.380449  248270 provision.go:177] copyRemoteCerts
	I1210 05:29:11.380516  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:29:11.383191  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383530  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.383557  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.383724  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.468790  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 05:29:11.501764  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:29:11.536197  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:29:11.569192  248270 provision.go:87] duration metric: took 258.704158ms to configureAuth
	I1210 05:29:11.569224  248270 buildroot.go:189] setting minikube options for container-runtime
	I1210 05:29:11.569456  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:11.572768  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573263  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.573289  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.573596  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.573815  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.573830  248270 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:29:11.833231  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:29:11.833265  248270 machine.go:97] duration metric: took 878.061601ms to provisionDockerMachine
	I1210 05:29:11.833278  248270 client.go:176] duration metric: took 19.640654056s to LocalClient.Create
	I1210 05:29:11.833288  248270 start.go:167] duration metric: took 19.640714044s to libmachine.API.Create "addons-819501"
	I1210 05:29:11.833300  248270 start.go:293] postStartSetup for "addons-819501" (driver="kvm2")
	I1210 05:29:11.833326  248270 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:29:11.833399  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:29:11.836778  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837269  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.837308  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.837481  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:11.922086  248270 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:29:11.927731  248270 info.go:137] Remote host: Buildroot 2025.02
	I1210 05:29:11.927773  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 05:29:11.927871  248270 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 05:29:11.927934  248270 start.go:296] duration metric: took 94.612566ms for postStartSetup
	I1210 05:29:11.931495  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.931980  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.932019  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.932307  248270 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/config.json ...
	I1210 05:29:11.932540  248270 start.go:128] duration metric: took 19.74210366s to createHost
	I1210 05:29:11.934767  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935144  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:11.935166  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:11.935324  248270 main.go:143] libmachine: Using SSH client type: native
	I1210 05:29:11.935513  248270 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.227 22 <nil> <nil>}
	I1210 05:29:11.935522  248270 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 05:29:12.046287  248270 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765344552.004969125
	
	I1210 05:29:12.046317  248270 fix.go:216] guest clock: 1765344552.004969125
	I1210 05:29:12.046328  248270 fix.go:229] Guest: 2025-12-10 05:29:12.004969125 +0000 UTC Remote: 2025-12-10 05:29:11.932556032 +0000 UTC m=+19.877288748 (delta=72.413093ms)
	I1210 05:29:12.046353  248270 fix.go:200] guest clock delta is within tolerance: 72.413093ms
	I1210 05:29:12.046359  248270 start.go:83] releasing machines lock for "addons-819501", held for 19.85604026s
	I1210 05:29:12.049360  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.049703  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.049730  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.050472  248270 ssh_runner.go:195] Run: cat /version.json
	I1210 05:29:12.050505  248270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:29:12.053634  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054149  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054174  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054210  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.054370  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.054796  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:12.054838  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:12.055088  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:12.153950  248270 ssh_runner.go:195] Run: systemctl --version
	I1210 05:29:12.161170  248270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:29:12.329523  248270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:29:12.337761  248270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:29:12.337846  248270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:29:12.363822  248270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:29:12.363854  248270 start.go:496] detecting cgroup driver to use...
	I1210 05:29:12.363953  248270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:29:12.391660  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:29:12.411256  248270 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:29:12.411332  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:29:12.430231  248270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:29:12.447813  248270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:29:12.603440  248270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:29:12.827553  248270 docker.go:234] disabling docker service ...
	I1210 05:29:12.827647  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:29:12.846039  248270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:29:12.862361  248270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:29:13.020176  248270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:29:13.164368  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:29:13.182024  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:29:13.206545  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:13.357154  248270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:29:13.357230  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.371402  248270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 05:29:13.371473  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.385362  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.398751  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.412259  248270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:29:13.426396  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.440016  248270 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.462382  248270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:29:13.476454  248270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:29:13.487470  248270 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:29:13.487559  248270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:29:13.511008  248270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:29:13.525764  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:13.668661  248270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:29:13.804341  248270 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:29:13.804461  248270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:29:13.811120  248270 start.go:564] Will wait 60s for crictl version
	I1210 05:29:13.811237  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:13.816221  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 05:29:13.855240  248270 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 05:29:13.855361  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.886038  248270 ssh_runner.go:195] Run: crio --version
	I1210 05:29:13.919951  248270 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1210 05:29:13.923902  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924339  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:13.924363  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:13.924587  248270 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 05:29:13.929723  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:13.945980  248270 kubeadm.go:884] updating cluster {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:29:13.946170  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.110289  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.252203  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:14.408812  248270 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 05:29:14.408919  248270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:29:14.443222  248270 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 05:29:14.443255  248270 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:29:14.443321  248270 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.443333  248270 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.443375  248270 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.443403  248270 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.443402  248270 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.443464  248270 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.443462  248270 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.443481  248270 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.444803  248270 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.444813  248270 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.444829  248270 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.444808  248270 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.444831  248270 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.444830  248270 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.444820  248270 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.586669  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.587332  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.588817  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.594403  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.600393  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.610504  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:29:14.612226  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.715068  248270 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:29:14.715118  248270 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.715173  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 05:29:14.779487  248270 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.779435  248270 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 05:29:14.779538  248270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.779539  248270 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 05:29:14.779571  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779582  248270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.779618  248270 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 05:29:14.779660  248270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.779708  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779715  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.779550  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787172  248270 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 05:29:14.787221  248270 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.787237  248270 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 05:29:14.787275  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.787288  248270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.787332  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.787336  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:14.791460  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.791483  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.791502  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.791543  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.866781  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.866836  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.866853  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.883432  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.892951  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.893016  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.912927  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.976361  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:14.984059  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.003284  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:15.029014  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:15.048649  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:15.048661  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:15.048727  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:15.116050  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:29:15.143539  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:15.143601  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1210 05:29:15.143736  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:15.173190  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 05:29:15.173206  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 05:29:15.173333  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:15.173334  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:15.188713  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 05:29:15.188872  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:15.194426  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 05:29:15.194565  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.218546  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 05:29:15.218551  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:29:15.218634  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:29:15.218700  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.238721  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 05:29:15.238751  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 05:29:15.238779  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 05:29:15.238854  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 05:29:15.238898  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 05:29:15.238941  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 05:29:15.238975  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 05:29:15.238991  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 05:29:15.238858  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:15.239079  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 05:29:15.244693  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:29:15.244738  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 05:29:15.336790  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 05:29:15.336841  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 05:29:15.374498  248270 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.374589  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 05:29:15.442120  248270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859327  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:29:15.859390  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859423  248270 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 05:29:15.859450  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 05:29:15.859471  248270 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:15.859541  248270 ssh_runner.go:195] Run: which crictl
	I1210 05:29:17.763703  248270 ssh_runner.go:235] Completed: which crictl: (1.904127102s)
	I1210 05:29:17.763747  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.904260399s)
	I1210 05:29:17.763776  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 05:29:17.763799  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:17.763815  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.763860  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 05:29:17.801244  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:19.427418  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.663530033s)
	I1210 05:29:19.427461  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 05:29:19.427466  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.626181448s)
	I1210 05:29:19.427490  248270 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427547  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 05:29:19.427548  248270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:21.515976  248270 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.088322228s)
	I1210 05:29:21.516048  248270 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:29:21.515979  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.088327761s)
	I1210 05:29:21.516139  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:21.516152  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 05:29:21.516199  248270 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.516255  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:29:21.521779  248270 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:29:21.521829  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 05:29:23.716371  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.200087663s)
	I1210 05:29:23.716404  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 05:29:23.716440  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:23.716492  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 05:29:25.782777  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (2.066253017s)
	I1210 05:29:25.782824  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 05:29:25.782859  248270 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:25.782943  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 05:29:27.253133  248270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.47015767s)
	I1210 05:29:27.253186  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 05:29:27.253222  248270 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.253296  248270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:29:27.900792  248270 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:29:27.900866  248270 cache_images.go:125] Successfully loaded all cached images
	I1210 05:29:27.900893  248270 cache_images.go:94] duration metric: took 13.457620664s to LoadCachedImages
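	The image-loading sequence above repeats one pattern per image: stat the archive on the guest, scp it from the local cache if it is missing, then load it with podman. A minimal Go sketch of that stat -> scp -> podman load sequence follows; it is illustrative only (minikube does this through its own ssh_runner, and the host, key path and file paths here are placeholder assumptions):

	// load_cached_image_sketch.go: shell out to ssh/scp to mirror the pattern in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func loadCachedImage(host, key, localPath, remotePath string) error {
		// Does the image archive already exist on the guest?
		if err := exec.Command("ssh", "-i", key, host, "stat", remotePath).Run(); err != nil {
			// Not there yet: copy it from the local cache (the scp step in the log).
			if err := exec.Command("scp", "-i", key, localPath, host+":"+remotePath).Run(); err != nil {
				return fmt.Errorf("scp %s: %w", localPath, err)
			}
		}
		// Load the archive into the container runtime's image store.
		out, err := exec.Command("ssh", "-i", key, host, "sudo", "podman", "load", "-i", remotePath).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage(
			"docker@192.168.50.227",
			"/home/jenkins/.minikube/machines/addons-819501/id_rsa", // placeholder key path
			"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
			"/var/lib/minikube/images/pause_3.10.1",
		)
		fmt.Println(err)
	}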
	I1210 05:29:27.900927  248270 kubeadm.go:935] updating node { 192.168.50.227 8443 v1.34.3 crio true true} ...
	I1210 05:29:27.901107  248270 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-819501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:29:27.901270  248270 ssh_runner.go:195] Run: crio config
	I1210 05:29:27.952088  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:27.952115  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:27.952136  248270 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:29:27.952158  248270 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.227 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-819501 NodeName:addons-819501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:29:27.952294  248270 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-819501"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:29:27.952375  248270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.965806  248270 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 05:29:27.965903  248270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 05:29:27.978340  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 05:29:27.978345  248270 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 05:29:27.978458  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:27.978469  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 05:29:27.978552  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 05:29:27.998011  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 05:29:27.998043  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 05:29:27.998018  248270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 05:29:27.998067  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 05:29:27.998069  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 05:29:28.014708  248270 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 05:29:28.014787  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 05:29:28.819394  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:29:28.832094  248270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 05:29:28.854035  248270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:29:28.875757  248270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1210 05:29:28.897490  248270 ssh_runner.go:195] Run: grep 192.168.50.227	control-plane.minikube.internal$ /etc/hosts
	I1210 05:29:28.902042  248270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:29:28.918543  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:29.065436  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:29.106997  248270 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501 for IP: 192.168.50.227
	I1210 05:29:29.107026  248270 certs.go:195] generating shared ca certs ...
	I1210 05:29:29.107047  248270 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.107244  248270 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 05:29:29.260185  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt ...
	I1210 05:29:29.260226  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt: {Name:mk7e3ea493469b63ffe73a3fd5c0aebe67cc96c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260418  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key ...
	I1210 05:29:29.260430  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key: {Name:mk18d206c401766c525db7646d9b50127ae5a4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.260509  248270 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 05:29:29.303788  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt ...
	I1210 05:29:29.303818  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt: {Name:mk93a704a340d2989dfaa2c6ae18dd0ded5b740c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304005  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key ...
	I1210 05:29:29.304017  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key: {Name:mk52708f456900179c4e21317e6ee01f1f662a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.304092  248270 certs.go:257] generating profile certs ...
	I1210 05:29:29.304158  248270 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key
	I1210 05:29:29.304173  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt with IP's: []
	I1210 05:29:29.373028  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt ...
	I1210 05:29:29.373060  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: {Name:mkacd1d17bdb9699db5acb0deccedf4b963e9627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373247  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key ...
	I1210 05:29:29.373259  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.key: {Name:mkc8bb793a8aba8601b09fb6b4c6b561546e1716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.373344  248270 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21
	I1210 05:29:29.373366  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.227]
	I1210 05:29:29.412783  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 ...
	I1210 05:29:29.412819  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21: {Name:mkbecca211b22f296a63bf12c0f8d6348e074d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413027  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 ...
	I1210 05:29:29.413042  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21: {Name:mkd21c7f030b36e5b0f136cec809fcc4792c4753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.413124  248270 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt
	I1210 05:29:29.413195  248270 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key.b2effa21 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key
	I1210 05:29:29.413246  248270 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key
	I1210 05:29:29.413264  248270 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt with IP's: []
	I1210 05:29:29.588512  248270 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt ...
	I1210 05:29:29.588545  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt: {Name:mkbfd2473bf6ad2df18575d3c1713540ff713d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588726  248270 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key ...
	I1210 05:29:29.588740  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key: {Name:mk5d55c2451593ca28ccc38ada487efa06a43ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:29.588942  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:29:29.588986  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 05:29:29.589013  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:29:29.589037  248270 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 05:29:29.589603  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:29:29.623229  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:29:29.655622  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:29:29.688080  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:29:29.719938  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:29:29.752646  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:29:29.787596  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:29:29.822559  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:29:29.861697  248270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:29:29.893727  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:29:29.916020  248270 ssh_runner.go:195] Run: openssl version
	I1210 05:29:29.923087  248270 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.937161  248270 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:29:29.950819  248270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956621  248270 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.956683  248270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:29:29.964340  248270 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:29:29.977116  248270 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:29:29.989798  248270 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:29:29.994829  248270 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:29:29.994916  248270 kubeadm.go:401] StartCluster: {Name:addons-819501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 C
lusterName:addons-819501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:29.995012  248270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:29:29.995077  248270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:29:30.033996  248270 cri.go:89] found id: ""
	I1210 05:29:30.034077  248270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:29:30.047749  248270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:29:30.061245  248270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:29:30.075038  248270 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:29:30.075063  248270 kubeadm.go:158] found existing configuration files:
	
	I1210 05:29:30.075128  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:29:30.087377  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:29:30.087446  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:29:30.100015  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:29:30.112415  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:29:30.112501  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:29:30.125599  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.137858  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:29:30.137955  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:29:30.150895  248270 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:29:30.162780  248270 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:29:30.162853  248270 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:29:30.175318  248270 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 05:29:30.340910  248270 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:29:42.405330  248270 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 05:29:42.405415  248270 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:29:42.405523  248270 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:29:42.405657  248270 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:29:42.405780  248270 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:29:42.405921  248270 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:29:42.407664  248270 out.go:252]   - Generating certificates and keys ...
	I1210 05:29:42.407781  248270 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:29:42.407857  248270 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:29:42.407979  248270 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:29:42.408061  248270 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:29:42.408157  248270 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:29:42.408230  248270 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:29:42.408313  248270 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:29:42.408455  248270 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408531  248270 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:29:42.408656  248270 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-819501 localhost] and IPs [192.168.50.227 127.0.0.1 ::1]
	I1210 05:29:42.408732  248270 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:29:42.408789  248270 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:29:42.408829  248270 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:29:42.408896  248270 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:29:42.408941  248270 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:29:42.409022  248270 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:29:42.409106  248270 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:29:42.409190  248270 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:29:42.409293  248270 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:29:42.409407  248270 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:29:42.409507  248270 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:29:42.411069  248270 out.go:252]   - Booting up control plane ...
	I1210 05:29:42.411187  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:29:42.411282  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:29:42.411391  248270 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:29:42.411501  248270 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:29:42.411592  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:29:42.411684  248270 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:29:42.411786  248270 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:29:42.411843  248270 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:29:42.412015  248270 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:29:42.412159  248270 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:29:42.412277  248270 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203089s
	I1210 05:29:42.412416  248270 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:29:42.412493  248270 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.227:8443/livez
	I1210 05:29:42.412581  248270 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:29:42.412660  248270 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:29:42.412738  248270 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.103693537s
	I1210 05:29:42.412795  248270 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.793585918s
	I1210 05:29:42.412851  248270 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001380192s
	I1210 05:29:42.412969  248270 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:29:42.413101  248270 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:29:42.413187  248270 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:29:42.413353  248270 kubeadm.go:319] [mark-control-plane] Marking the node addons-819501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:29:42.413409  248270 kubeadm.go:319] [bootstrap-token] Using token: ifaxfb.g6s3du0ko87s83xe
	I1210 05:29:42.415656  248270 out.go:252]   - Configuring RBAC rules ...
	I1210 05:29:42.415753  248270 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:29:42.415838  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:29:42.415978  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:29:42.416146  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:29:42.416292  248270 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:29:42.416410  248270 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:29:42.416562  248270 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:29:42.416613  248270 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:29:42.416653  248270 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:29:42.416659  248270 kubeadm.go:319] 
	I1210 05:29:42.416721  248270 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:29:42.416730  248270 kubeadm.go:319] 
	I1210 05:29:42.416794  248270 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:29:42.416799  248270 kubeadm.go:319] 
	I1210 05:29:42.416820  248270 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:29:42.416871  248270 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:29:42.416929  248270 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:29:42.416935  248270 kubeadm.go:319] 
	I1210 05:29:42.416983  248270 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:29:42.416988  248270 kubeadm.go:319] 
	I1210 05:29:42.417031  248270 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:29:42.417039  248270 kubeadm.go:319] 
	I1210 05:29:42.417087  248270 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:29:42.417155  248270 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:29:42.417214  248270 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:29:42.417227  248270 kubeadm.go:319] 
	I1210 05:29:42.417300  248270 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:29:42.417370  248270 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:29:42.417376  248270 kubeadm.go:319] 
	I1210 05:29:42.417457  248270 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417548  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 \
	I1210 05:29:42.417568  248270 kubeadm.go:319] 	--control-plane 
	I1210 05:29:42.417576  248270 kubeadm.go:319] 
	I1210 05:29:42.417649  248270 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:29:42.417655  248270 kubeadm.go:319] 
	I1210 05:29:42.417725  248270 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ifaxfb.g6s3du0ko87s83xe \
	I1210 05:29:42.417846  248270 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 
	I1210 05:29:42.417859  248270 cni.go:84] Creating CNI manager for ""
	I1210 05:29:42.417870  248270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:29:42.419498  248270 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 05:29:42.420865  248270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 05:29:42.435368  248270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 05:29:42.463419  248270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:29:42.463507  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:42.463555  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-819501 minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-819501 minikube.k8s.io/primary=true
	I1210 05:29:42.517462  248270 ops.go:34] apiserver oom_adj: -16
	I1210 05:29:42.645586  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.145896  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:43.646263  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.146506  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:44.646447  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.146404  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:45.646503  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.146345  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.645679  248270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:29:46.765786  248270 kubeadm.go:1114] duration metric: took 4.302351478s to wait for elevateKubeSystemPrivileges
	I1210 05:29:46.765837  248270 kubeadm.go:403] duration metric: took 16.770933871s to StartCluster
	I1210 05:29:46.765872  248270 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766077  248270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:29:46.766575  248270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:46.766803  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:29:46.766812  248270 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.227 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:29:46.766895  248270 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:29:46.767036  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.767055  248270 addons.go:70] Setting yakd=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting cloud-spanner=true in profile "addons-819501"
	I1210 05:29:46.767080  248270 addons.go:70] Setting default-storageclass=true in profile "addons-819501"
	I1210 05:29:46.767094  248270 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-819501"
	I1210 05:29:46.767101  248270 addons.go:239] Setting addon cloud-spanner=true in "addons-819501"
	I1210 05:29:46.767102  248270 addons.go:239] Setting addon yakd=true in "addons-819501"
	I1210 05:29:46.767110  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-819501"
	I1210 05:29:46.767110  248270 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767136  248270 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-819501"
	I1210 05:29:46.767140  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767144  248270 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:29:46.767153  248270 addons.go:70] Setting gcp-auth=true in profile "addons-819501"
	I1210 05:29:46.767165  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767173  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767110  248270 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-819501"
	I1210 05:29:46.767198  248270 addons.go:70] Setting inspektor-gadget=true in profile "addons-819501"
	I1210 05:29:46.767209  248270 addons.go:239] Setting addon inspektor-gadget=true in "addons-819501"
	I1210 05:29:46.767208  248270 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-819501"
	I1210 05:29:46.767236  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767251  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768008  248270 addons.go:70] Setting metrics-server=true in profile "addons-819501"
	I1210 05:29:46.768032  248270 addons.go:239] Setting addon metrics-server=true in "addons-819501"
	I1210 05:29:46.768064  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767182  248270 addons.go:70] Setting ingress=true in profile "addons-819501"
	I1210 05:29:46.768104  248270 addons.go:239] Setting addon ingress=true in "addons-819501"
	I1210 05:29:46.768148  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767190  248270 addons.go:70] Setting ingress-dns=true in profile "addons-819501"
	I1210 05:29:46.768193  248270 addons.go:239] Setting addon ingress-dns=true in "addons-819501"
	I1210 05:29:46.768232  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768313  248270 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-819501"
	I1210 05:29:46.768289  248270 addons.go:70] Setting storage-provisioner=true in profile "addons-819501"
	I1210 05:29:46.768342  248270 addons.go:70] Setting volcano=true in profile "addons-819501"
	I1210 05:29:46.768349  248270 addons.go:239] Setting addon storage-provisioner=true in "addons-819501"
	I1210 05:29:46.768354  248270 addons.go:239] Setting addon volcano=true in "addons-819501"
	I1210 05:29:46.768375  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768380  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.767174  248270 mustload.go:66] Loading cluster: addons-819501
	I1210 05:29:46.768847  248270 addons.go:70] Setting registry=true in profile "addons-819501"
	I1210 05:29:46.768871  248270 addons.go:239] Setting addon registry=true in "addons-819501"
	I1210 05:29:46.768917  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769020  248270 config.go:182] Loaded profile config "addons-819501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:29:46.769085  248270 addons.go:70] Setting registry-creds=true in profile "addons-819501"
	I1210 05:29:46.769091  248270 out.go:179] * Verifying Kubernetes components...
	I1210 05:29:46.769102  248270 addons.go:239] Setting addon registry-creds=true in "addons-819501"
	I1210 05:29:46.769133  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.769444  248270 addons.go:70] Setting volumesnapshots=true in profile "addons-819501"
	I1210 05:29:46.769469  248270 addons.go:239] Setting addon volumesnapshots=true in "addons-819501"
	I1210 05:29:46.769500  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.768333  248270 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-819501"
	I1210 05:29:46.771211  248270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:29:46.775133  248270 addons.go:239] Setting addon default-storageclass=true in "addons-819501"
	I1210 05:29:46.775185  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.775483  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:29:46.775493  248270 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:29:46.775702  248270 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:29:46.775491  248270 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:29:46.776909  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:29:46.776932  248270 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:29:46.776946  248270 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1210 05:29:46.777456  248270 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:29:46.777012  248270 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:46.777568  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:29:46.777745  248270 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:29:46.777753  248270 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:29:46.777795  248270 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:46.777807  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:29:46.778720  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:29:46.778753  248270 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:46.779205  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:29:46.778868  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.779621  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.779626  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:29:46.779644  248270 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:29:46.779699  248270 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:46.779717  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:29:46.779802  248270 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:29:46.779812  248270 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:46.781021  248270 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-819501"
	I1210 05:29:46.781066  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:46.781599  248270 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:46.781619  248270 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:29:46.781927  248270 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:29:46.781979  248270 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:29:46.782811  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:29:46.782848  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:46.783287  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:29:46.782853  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:46.783372  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:29:46.783764  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:29:46.783783  248270 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:46.784144  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:29:46.784493  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:29:46.784507  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:29:46.784990  248270 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:29:46.785462  248270 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:29:46.787159  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:29:46.787165  248270 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:29:46.787232  248270 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:29:46.787240  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:29:46.787412  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:29:46.787542  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787581  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.787826  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.788403  248270 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:46.788672  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:29:46.789422  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789691  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789726  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.789870  248270 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:29:46.789904  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.789986  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.790377  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790436  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790604  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.790708  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.790929  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:29:46.791336  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791379  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.791545  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.791579  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.791918  248270 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:46.791944  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:29:46.792221  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792333  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792335  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.792373  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.792580  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.792613  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.793171  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793390  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.793737  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:29:46.793814  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.793846  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.794407  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.794632  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795534  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795729  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.795767  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.795997  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796033  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796253  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796350  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796260  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:29:46.796383  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.796916  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.796960  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.796989  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797284  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.797288  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.797322  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797369  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.797592  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798121  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798164  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798338  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.798405  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798805  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798850  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.798934  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.798992  248270 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:29:46.799112  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.799331  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.799362  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.799584  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:46.800274  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:29:46.800293  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:29:46.802692  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803162  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:46.803185  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:46.803366  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	W1210 05:29:47.001763  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.001799  248270 retry.go:31] will retry after 305.783852ms: ssh: handshake failed: read tcp 192.168.50.1:60794->192.168.50.227:22: read: connection reset by peer
	W1210 05:29:47.014988  248270 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
	I1210 05:29:47.015023  248270 retry.go:31] will retry after 221.795568ms: ssh: handshake failed: read tcp 192.168.50.1:60818->192.168.50.227:22: read: connection reset by peer
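The two handshake failures above are retried with a short, growing delay (retry.go logs "will retry after Nms"). As a rough illustration of that pattern only, here is a minimal, self-contained Go sketch of dial-with-backoff; the function name, attempt count, and delay values are invented for the example and are not minikube's actual implementation.

	// retrydial.go: illustrative retry-with-backoff, loosely mirroring the
	// "dial failure (will retry)" / "will retry after Nms" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// dialWithRetry calls dial up to attempts times, sleeping a jittered,
	// doubling delay between failures, and returns the last error if all fail.
	func dialWithRetry(dial func() error, attempts int) error {
		var err error
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if err = dial(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("dial failure (will retry): %v; retrying after %v\n", err, delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := dialWithRetry(func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		}, 5)
		fmt.Println("final result:", err)
	}
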
	I1210 05:29:47.174748  248270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:29:47.174750  248270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 05:29:47.407045  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:29:47.432282  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:29:47.432309  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:29:47.482855  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:29:47.501562  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:29:47.503279  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:29:47.503299  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:29:47.509563  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:29:47.515555  248270 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:29:47.515582  248270 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:29:47.525606  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:29:47.562132  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:29:47.566586  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:29:47.617239  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:29:47.617273  248270 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:29:47.645948  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:29:47.645980  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:29:47.758438  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:29:47.910234  248270 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:47.910257  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:29:47.920337  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:29:47.920367  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:29:48.015067  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:29:48.027823  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:29:48.027852  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:29:48.067181  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:29:48.067220  248270 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:29:48.292618  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:29:48.292654  248270 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:29:48.352705  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:29:48.609223  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:29:48.609250  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:29:48.685554  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:29:48.755069  248270 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:29:48.755098  248270 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:29:48.842106  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:29:48.842167  248270 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:29:48.875769  248270 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:48.875798  248270 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:29:49.413506  248270 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:49.413534  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:29:49.466897  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:29:49.466930  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:29:49.622705  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:29:49.622739  248270 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:29:49.796096  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:29:50.219307  248270 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:29:50.219336  248270 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:29:50.219351  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:29:50.321293  248270 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:50.321319  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:29:50.537459  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:29:50.537499  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:29:50.716098  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:29:51.041250  248270 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.866447078s)
	I1210 05:29:51.041309  248270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.866465642s)
	I1210 05:29:51.041340  248270 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
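The completed sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP. Reconstructed from the sed expressions in the logged command (not copied from the cluster), the relevant fragment of the rewritten Corefile would look roughly like this, with the other default plugins left unchanged:

	.:53 {
	    log
	    errors
	    # ... other default plugins unchanged ...
	    hosts {
	       192.168.50.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ...
	}
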
	I1210 05:29:51.042044  248270 node_ready.go:35] waiting up to 6m0s for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049135  248270 node_ready.go:49] node "addons-819501" is "Ready"
	I1210 05:29:51.049170  248270 node_ready.go:38] duration metric: took 7.101622ms for node "addons-819501" to be "Ready" ...
	I1210 05:29:51.049187  248270 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:29:51.049251  248270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:29:51.068361  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:29:51.068386  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:29:51.554068  248270 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-819501" context rescaled to 1 replicas
	I1210 05:29:51.613448  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:29:51.613477  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:29:52.107019  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:29:52.107058  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:29:52.549779  248270 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:52.549811  248270 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:29:53.265677  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:29:54.221671  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:29:54.225457  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.225987  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:54.226021  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:54.226211  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.099859  248270 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:29:55.502306  248270 addons.go:239] Setting addon gcp-auth=true in "addons-819501"
	I1210 05:29:55.502381  248270 host.go:66] Checking if "addons-819501" exists ...
	I1210 05:29:55.504619  248270 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:29:55.507749  248270 main.go:143] libmachine: domain addons-819501 has defined MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508396  248270 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:26:32", ip: ""} in network mk-addons-819501: {Iface:virbr2 ExpiryTime:2025-12-10 06:29:08 +0000 UTC Type:0 Mac:52:54:00:0b:26:32 Iaid: IPaddr:192.168.50.227 Prefix:24 Hostname:addons-819501 Clientid:01:52:54:00:0b:26:32}
	I1210 05:29:55.508441  248270 main.go:143] libmachine: domain addons-819501 has defined IP address 192.168.50.227 and MAC address 52:54:00:0b:26:32 in network mk-addons-819501
	I1210 05:29:55.508753  248270 sshutil.go:53] new ssh client: &{IP:192.168.50.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/addons-819501/id_rsa Username:docker}
	I1210 05:29:55.727192  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.320101368s)
	I1210 05:29:55.727260  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.244355162s)
	I1210 05:29:55.727286  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.217695624s)
	I1210 05:29:55.727349  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.225763647s)
	I1210 05:29:55.727464  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.201812774s)
	I1210 05:29:55.727514  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.165347037s)
	I1210 05:29:55.727599  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.160988535s)
	I1210 05:29:55.727655  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.969192004s)
	W1210 05:29:55.895228  248270 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
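The 'storage-provisioner-rancher' warning above is Kubernetes' optimistic-concurrency conflict: the StorageClass was modified between read and update, so the write was rejected with "the object has been modified; please apply your changes to the latest version". A common way to handle this from Go is client-go's RetryOnConflict, which re-runs a read-modify-write closure whenever the server returns a 409 Conflict. The sketch below is illustrative only (it marks local-path as default, mirroring what the addon callback attempts); the kubeconfig path and error handling are simplified assumptions.

	// defaultsc.go: retry a StorageClass update on resourceVersion conflicts.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Re-fetch the object on every attempt so the update carries the latest
		// resourceVersion; RetryOnConflict re-runs the closure on 409 Conflict.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("local-path marked as the default StorageClass")
	}
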
	I1210 05:29:58.186837  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.834088583s)
	I1210 05:29:58.186914  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.501321346s)
	I1210 05:29:58.186956  248270 addons.go:495] Verifying addon registry=true in "addons-819501"
	I1210 05:29:58.187022  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.39087132s)
	I1210 05:29:58.187082  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.96769279s)
	I1210 05:29:58.187047  248270 addons.go:495] Verifying addon metrics-server=true in "addons-819501"
	I1210 05:29:58.187133  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.172020307s)
	I1210 05:29:58.187195  248270 addons.go:495] Verifying addon ingress=true in "addons-819501"
	I1210 05:29:58.188701  248270 out.go:179] * Verifying registry addon...
	I1210 05:29:58.188716  248270 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-819501 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:29:58.189735  248270 out.go:179] * Verifying ingress addon...
	I1210 05:29:58.191374  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:29:58.192560  248270 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:29:58.348103  248270 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:29:58.348137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.367929  248270 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:29:58.367966  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
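The kapi.go lines above poll the cluster until every pod matching a label selector (kubernetes.io/minikube-addons=registry in kube-system, app.kubernetes.io/name=ingress-nginx in ingress-nginx) leaves Pending and reports Ready. A minimal client-go sketch of that kind of label-selector wait follows; the polling interval, timeout, and helper names are illustrative, not minikube's own kapi implementation.

	// waitpods.go: poll pods by label selector until all are Running and Ready.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podsReady reports whether every pod matching the selector is Running
	// and has the Ready condition set to True.
	func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			ok, err := podsReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")
			if err == nil && ok {
				fmt.Println("all matching pods are Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pods:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
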
	I1210 05:29:58.542091  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.825930838s)
	I1210 05:29:58.542168  248270 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.492891801s)
	W1210 05:29:58.542182  248270 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:58.542204  248270 api_server.go:72] duration metric: took 11.775367493s to wait for apiserver process to appear ...
	I1210 05:29:58.542216  248270 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:29:58.542242  248270 api_server.go:253] Checking apiserver healthz at https://192.168.50.227:8443/healthz ...
	I1210 05:29:58.542243  248270 retry.go:31] will retry after 174.698732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:29:58.565722  248270 api_server.go:279] https://192.168.50.227:8443/healthz returned 200:
	ok
	I1210 05:29:58.585143  248270 api_server.go:141] control plane version: v1.34.3
	I1210 05:29:58.585187  248270 api_server.go:131] duration metric: took 42.962592ms to wait for apiserver health ...
	I1210 05:29:58.585201  248270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:29:58.669908  248270 system_pods.go:59] 16 kube-system pods found
	I1210 05:29:58.669957  248270 system_pods.go:61] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.669977  248270 system_pods.go:61] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669984  248270 system_pods.go:61] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.669992  248270 system_pods.go:61] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.670000  248270 system_pods.go:61] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.670006  248270 system_pods.go:61] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.670017  248270 system_pods.go:61] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.670027  248270 system_pods.go:61] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.670033  248270 system_pods.go:61] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.670041  248270 system_pods.go:61] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.670051  248270 system_pods.go:61] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.670060  248270 system_pods.go:61] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.670076  248270 system_pods.go:61] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.670084  248270 system_pods.go:61] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.670094  248270 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.670101  248270 system_pods.go:61] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.670110  248270 system_pods.go:74] duration metric: took 84.901558ms to wait for pod list to return data ...
	I1210 05:29:58.670120  248270 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:29:58.717796  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
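The failed apply above raced the VolumeSnapshot CRDs: the CustomResourceDefinitions were created in the same batch as a VolumeSnapshotClass, so the API server had no mapping for the new kind yet ("ensure CRDs are installed first"), and minikube simply retries the apply (here with --force). One hedged way to avoid that race from Go is to wait for the CRD's Established condition before creating custom resources; the sketch below uses the apiextensions clientset, with the CRD name taken from the stdout above and everything else (timeout, kubeconfig path) assumed for illustration.

	// waitcrd.go: wait for a CRD to reach Established before using its kind.
	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRDEstablished polls until the named CRD reports Established=True.
	func waitForCRDEstablished(ctx context.Context, c *apiextclient.Clientset, name string) error {
		for {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("CRD %s not established: %w", name, ctx.Err())
			case <-time.After(250 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		fmt.Println("CRD established; VolumeSnapshotClass objects can now be applied")
	}
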
	I1210 05:29:58.755265  248270 default_sa.go:45] found service account: "default"
	I1210 05:29:58.755306  248270 default_sa.go:55] duration metric: took 85.176789ms for default service account to be created ...
	I1210 05:29:58.755322  248270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:29:58.837383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:58.837387  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:58.837931  248270 system_pods.go:86] 17 kube-system pods found
	I1210 05:29:58.837967  248270 system_pods.go:89] "amd-gpu-device-plugin-xwmk6" [ca338a7b-5d2c-4894-a615-0224cddd49ff] Running
	I1210 05:29:58.837983  248270 system_pods.go:89] "coredns-66bc5c9577-h4zx9" [1a4ca1fc-ccd8-40e7-86e6-ec486935adac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838000  248270 system_pods.go:89] "coredns-66bc5c9577-lwtl7" [d84b8912-1587-45b3-956c-791ea7ec71c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:29:58.838007  248270 system_pods.go:89] "etcd-addons-819501" [82bf46a7-17c9-462a-96f2-ff9578d2f44b] Running
	I1210 05:29:58.838018  248270 system_pods.go:89] "kube-apiserver-addons-819501" [0925d6e7-492c-4cb7-947a-1540b753d464] Running
	I1210 05:29:58.838025  248270 system_pods.go:89] "kube-controller-manager-addons-819501" [7f7275d9-f455-4620-9240-435ef4487f90] Running
	I1210 05:29:58.838036  248270 system_pods.go:89] "kube-ingress-dns-minikube" [056ca6ed-0cab-42f7-bffb-24f0785fd003] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:29:58.838043  248270 system_pods.go:89] "kube-proxy-ngpzv" [75c58eba-0463-42c9-a9d6-3c579349bd49] Running
	I1210 05:29:58.838049  248270 system_pods.go:89] "kube-scheduler-addons-819501" [8c2cc920-2a69-4923-8222-c7affed57f02] Running
	I1210 05:29:58.838060  248270 system_pods.go:89] "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:29:58.838075  248270 system_pods.go:89] "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:29:58.838101  248270 system_pods.go:89] "registry-6b586f9694-lkhvn" [0a8387d7-19c7-49cd-8425-48c60f2e70ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:29:58.838115  248270 system_pods.go:89] "registry-creds-764b6fb674-fw65b" [21819916-d847-4fcb-8cd9-d14d7cb387fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:29:58.838123  248270 system_pods.go:89] "registry-proxy-25pr7" [945db864-8d9f-4e37-b866-28b9f77d42c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:29:58.838130  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m92vv" [d92b01be-d84c-4f63-88c1-ed58fc9236a3] Pending
	I1210 05:29:58.838137  248270 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xhmx9" [f17bb4a5-df22-4e1c-a6dd-a37a43712cbb] Pending
	I1210 05:29:58.838143  248270 system_pods.go:89] "storage-provisioner" [76ed88a2-563b-4ee6-9a9a-94669a45bd2a] Running
	I1210 05:29:58.838154  248270 system_pods.go:126] duration metric: took 82.823961ms to wait for k8s-apps to be running ...
	I1210 05:29:58.838177  248270 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:29:58.838240  248270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:29:59.216996  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:29:59.217048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.760212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:29:59.799267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.028379  248270 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.5237158s)
	I1210 05:30:00.030605  248270 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:30:00.031844  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.766107934s)
	I1210 05:30:00.031919  248270 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-819501"
	I1210 05:30:00.033501  248270 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:30:00.033501  248270 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:30:00.035389  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:30:00.035424  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:30:00.036495  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:30:00.092418  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:30:00.092524  248270 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:30:00.099191  248270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:30:00.099218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.154497  248270 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.154523  248270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:30:00.218466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.218476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:00.239458  248270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:30:00.551381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:00.700051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:00.700489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.046588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.060958  248270 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.222683048s)
	I1210 05:30:01.060998  248270 system_svc.go:56] duration metric: took 2.222816606s WaitForService to wait for kubelet
	I1210 05:30:01.061010  248270 kubeadm.go:587] duration metric: took 14.294174339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:30:01.061035  248270 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:30:01.060959  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.343106707s)
	I1210 05:30:01.067487  248270 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 05:30:01.067520  248270 node_conditions.go:123] node cpu capacity is 2
	I1210 05:30:01.067536  248270 node_conditions.go:105] duration metric: took 6.493768ms to run NodePressure ...
	I1210 05:30:01.067549  248270 start.go:242] waiting for startup goroutines ...
	I1210 05:30:01.200588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.203833  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.575049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:01.783678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:01.820709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:01.901619  248270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.662111316s)
	I1210 05:30:01.903054  248270 addons.go:495] Verifying addon gcp-auth=true in "addons-819501"
	I1210 05:30:01.905447  248270 out.go:179] * Verifying gcp-auth addon...
	I1210 05:30:01.908030  248270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:30:01.971590  248270 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:30:01.971620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.099231  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.211381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.211475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.423901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:02.544501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:02.700413  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:02.702741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:02.917043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.043724  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.195997  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.200750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.422696  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:03.542204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:03.699053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:03.702004  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:03.913811  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.042408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.197038  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.197194  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.414256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:04.544503  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:04.696289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:04.699139  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:04.912192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.043926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.197317  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.198154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.413841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:05.542630  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:05.836234  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:05.837463  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:05.912581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.041785  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.197071  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.198021  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.412100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:06.541405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:06.696562  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:06.697300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:06.912034  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.040758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.195563  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.196426  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.414799  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:07.541759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:07.699852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:07.700171  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:07.913279  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.199694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.200267  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.413210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:08.541549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:08.694572  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:08.700820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:08.913565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.043130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.199805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.200384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.413468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:09.738431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:09.743709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:09.744006  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:09.913178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.045294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.201112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.201536  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.412804  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:10.543961  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:10.700658  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:10.702942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:10.913710  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.042129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.198908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.204061  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.412990  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:11.540719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:11.701614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:11.702763  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:11.914546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.042555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.197852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.198653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.417360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:12.542814  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:12.697802  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:12.699425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:12.913723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.040006  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.195864  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.199933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.418096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:13.543369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:13.699360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:13.699489  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:13.912674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.040435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.197368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.198434  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.413640  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:14.540394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:14.696663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:14.699389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:14.915541  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.429953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.433247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:15.435508  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.435521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.540749  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:15.699596  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:15.700467  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:15.916459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.041580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.195018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.197575  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.412219  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:16.546078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:16.718549  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:16.718656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:16.912761  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.049720  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.199564  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.199795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.416037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:17.544532  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:17.699384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:17.700731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:17.945756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.041320  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.200647  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.200899  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.413830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:18.546581  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:18.697003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:18.702274  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:18.912631  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.043237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.197045  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.412567  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:19.540612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:19.695730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:19.698016  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:19.913692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.045701  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.197249  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.197641  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.413847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:20.542656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:20.698818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:20.704499  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:20.918024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.042453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.197201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.203709  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.415612  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:21.547180  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:21.697089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:21.698183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:21.913362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.048347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.198596  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:22.414126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:22.541638  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:22.695242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:22.698366  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.021289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.042294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.198290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.199072  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.414047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:23.554644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:23.701970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:23.705112  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:23.913583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.061821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.199309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.203209  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.418670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:24.541305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:24.696346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:24.700118  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:24.915306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.053528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.196902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.197440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.413854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:25.542391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:25.701623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:25.704483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:25.911945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.046126  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.212011  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.214603  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.412412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:26.543055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:26.696033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:26.699135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:26.911772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.040667  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.195375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.197664  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.412483  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:27.541678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:27.700619  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:27.701393  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:27.912791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.040728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.198661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.201110  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.416573  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:28.904196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:28.904398  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:28.904575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:28.913235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.045841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.200545  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.203445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.411788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:29.542883  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:29.699400  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:29.701573  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:29.913680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.040168  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.197412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.202099  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.413962  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:30.542075  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:30.697823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:30.699758  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:30.933743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.041743  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.200789  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.202078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.411894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:31.543354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:31.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:31.697635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:31.915844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.042712  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.195680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.196329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.413623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:32.541402  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:32.697475  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:32.700593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:32.914887  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.043306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.197100  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.199780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.412378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:33.542047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:33.696129  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:33.696893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:33.912809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.040753  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.195477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.196560  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.417130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:34.543287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:34.702104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:34.702245  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:34.912016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.040624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.196109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.196529  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.411973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:35.540615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:35.697044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:35.697719  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:35.913698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.040429  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.195759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.196067  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.413418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:36.541807  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:36.698339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:36.698627  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:36.912470  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.042035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.195635  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.195732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.412529  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:37.541462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:37.696355  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:37.696358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:37.911987  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.195127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.197401  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.412456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:38.541008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:38.696652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:38.696855  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:38.912382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.040953  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.195628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.197046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.411281  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:39.541262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:39.696395  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:39.696643  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:39.912042  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.195226  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.196941  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.413324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:40.540517  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:40.695280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:40.697540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:40.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.041169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.196610  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.197062  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.411344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:41.541121  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:41.697442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:41.697602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:41.912587  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.040852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.195549  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.196962  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.412892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:42.540835  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:42.695893  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:42.697068  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:42.912138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.041313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.197278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.197476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.412390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:43.541382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:43.695572  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:43.696270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:43.911719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.040923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.199247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.200272  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.412362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:44.543558  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:44.695243  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:44.696776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:44.912579  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.040311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.196524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.196806  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.412967  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:45.541112  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:45.695648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:45.698135  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:45.911238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.040981  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.197106  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.413618  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:46.541179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:46.700444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:46.700579  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:46.912054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.040909  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.197636  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:47.540223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:47.697846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:47.698230  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:47.912349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.042055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.198187  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.411783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:48.542256  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:48.695546  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:48.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:48.911939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.041385  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.196163  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.196353  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.412001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:49.541756  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:49.695783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:49.696993  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:49.911727  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.040528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.196370  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.196506  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.412758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:50.542086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:50.697092  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:50.698043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:50.912949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.041707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.195044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.196530  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.412015  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:51.540676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:51.695253  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:51.697115  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:51.911838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.040571  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.197767  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:52.541296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:52.695780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:52.698078  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:52.911868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.040966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.196250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.198224  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.412952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:53.541033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:53.695318  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:53.697975  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:53.918128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.041725  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.196856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.196973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.412680  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:54.541389  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:54.696707  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:54.697417  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:54.911941  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.041035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.197936  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.416128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:55.540974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:55.696532  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:55.696605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:55.912032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.041144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.195844  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.197459  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.412239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:56.543267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:56.696985  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:56.697065  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:56.911959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.041069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.196450  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.197271  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.411484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:57.543169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:57.698956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:57.700761  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:57.912297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.041754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.198093  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.412335  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:58.547820  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:58.697087  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:58.697216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:58.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.041334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.195790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.197287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.412314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:30:59.541645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:30:59.695025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:30:59.696945  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:30:59.913032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.042206  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.195905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.196849  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.413416  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:00.541940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:00.695570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:00.697495  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:00.912083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.040980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.197600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.197750  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.413084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:01.541003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:01.701147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:01.701307  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:01.912239  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.041838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.197451  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.199381  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:02.566113  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:02.697162  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:02.697404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:02.912554  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.041300  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.197265  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.197744  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.412283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:03.541313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:03.697345  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:03.697362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:03.912456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.041433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.196053  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.196386  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.411480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:04.541396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:04.697012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:04.697122  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:04.912021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.040986  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.196425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.197055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.414392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:05.543906  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:05.697548  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:05.699128  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:05.914449  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.059122  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.199035  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.199060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.411555  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:06.548145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:06.706728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:06.710127  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:06.913080  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.045418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.197313  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.198838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.412442  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:07.550194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:07.698588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:07.700102  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:07.912898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.041523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.200048  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.201764  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.416732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:08.541668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:08.696805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:08.699404  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:08.913919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.044035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.202083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.203196  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.424278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:09.543568  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:09.694390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:09.696701  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:09.928227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.047635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.202711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:10.203091  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.412177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:10.547099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:10.701159  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:10.701427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.010654  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.047478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.197452  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.197505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.412499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:11.542966  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:11.699944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:11.704028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:11.913615  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.040550  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.202167  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.206422  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.414111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:12.541586  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:12.701242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:12.701392  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:12.911980  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.041255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.196819  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:13.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.415365  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:13.543891  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:13.698298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:13.698483  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.024018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.044419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.200005  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.200056  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.414780  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:14.545383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:14.698157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:14.698240  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:14.912312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.047766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.200507  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.201630  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.414359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:15.542858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:15.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:15.697064  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:15.914308  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.042374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.203003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.205670  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.415017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:16.547467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:16.698087  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:16.704457  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:16.912729  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.042682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.197758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.199865  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.415143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:17.543861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:17.697284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:17.697374  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:17.912774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.051952  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.198347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.198464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.413101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:18.544220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:18.697480  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:18.698347  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:18.912174  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.041195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.195597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.197754  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.411930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:19.543107  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:19.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:19.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:19.911557  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.040046  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.195570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.197384  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.411741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:20.540714  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:20.696789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:20.696930  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:20.912723  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.040277  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.196028  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.196867  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.412845  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:21.540858  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:21.695001  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:21.697856  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:21.913055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.041789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.195447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.197793  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.412097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:22.541100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:22.695437  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:22.697741  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:22.912944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.041760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.195078  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.196924  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.411992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:23.540721  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:23.696368  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:23.697952  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:23.924611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.042614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.195700  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.197113  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.412165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:24.542539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:24.696569  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:24.699116  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:24.912589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.040194  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.197311  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.197820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.412984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:25.541361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:25.696760  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:25.699224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:25.911979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.041540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.195316  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.196448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.412932  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:26.540888  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:26.695457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:26.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:26.912839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.040692  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.198285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.198389  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.413216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:27.541054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:27.695524  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:27.697682  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:27.912984  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.041063  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.196678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.197402  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.412505  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:28.540110  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:28.695717  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:28.697736  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:28.912387  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.041413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.194635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.196805  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.412044  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:29.542788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:29.695337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:29.697270  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:29.911726  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.041798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.195904  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.197264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.411616  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:30.540508  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:30.697339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:30.697782  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:30.913547  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.040452  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.195922  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.196535  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.411982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:31.540478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:31.695278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:31.698540  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:31.912494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.043856  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.197609  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.197819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.412431  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:32.541752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:32.696539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:32.697403  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:32.912910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.041048  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.195704  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.197917  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.412695  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:33.540734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:33.695267  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:33.697595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:33.912951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.040593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.194646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.197266  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.411742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:34.540945  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:34.698161  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:34.698313  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:34.912160  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.042016  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.195246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.197425  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.412035  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:35.541521  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:35.694583  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:35.697617  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:35.911895  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.040992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.196447  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.197672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.412372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:36.541192  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:36.697869  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:36.699654  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:36.912908  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.040956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.196801  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.196942  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.411935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:37.541058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:37.694918  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:37.696472  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:37.912836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.040660  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.194944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.197448  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.411791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:38.541124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:38.697144  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:38.697809  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:38.912697  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.040461  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.194656  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.196407  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.411925  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:39.541913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:39.695201  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:39.696659  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:39.912467  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.040722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.195735  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.196428  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.412510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:40.540388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:40.695082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:40.696506  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:40.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.041847  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.197091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.197567  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.412898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:41.540868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:41.697534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:41.697902  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:41.912633  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.040266  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.196150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.198086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.411854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:42.541196  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:42.697014  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:42.698518  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:42.912523  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.040145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.195475  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.197044  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.412242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:43.540853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:43.695064  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:43.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:43.912652  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.040413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.195534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.196588  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.412338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:44.541409  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:44.696199  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:44.696325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:44.912351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.041899  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.195069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.411817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:45.540975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:45.696734  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:45.696956  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:45.912435  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.041440  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.194926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.197727  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.412648  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:46.540969  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:46.695484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:46.699909  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:46.914009  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.043433  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.197574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.197992  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.412334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:47.541535  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:47.694711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:47.695578  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:47.912973  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.040846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.195216  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.196738  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.412375  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:48.542493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:48.696138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:48.696493  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:48.912933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.041324  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.196820  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.412830  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:49.542225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:49.696086  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:49.696301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:49.911815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.040698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.196781  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.412677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:50.541012  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:50.695171  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:50.696197  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:50.912665  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.040391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.196254  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.196414  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.411787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:51.546678  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:51.697612  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:51.697836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:51.912801  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.041759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.197930  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.413220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:52.541923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:52.695369  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:52.696907  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:52.913889  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.040661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.196476  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.196983  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.411076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:53.541834  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:53.696458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:53.696646  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:53.912125  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.041959  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.195464  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.197614  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.412425  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:54.541916  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:54.698408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:54.699009  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:54.912043  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.041485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.196212  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.196795  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.412287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:55.541805  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:55.695021  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:55.697644  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:55.912403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.041280  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.196215  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.196845  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.412575  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:56.540086  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:56.695689  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:56.698723  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:56.912185  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.041278  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.195718  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.196357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.415350  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:57.541275  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:57.695983  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:57.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:57.912626  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.041539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.195051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.196590  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.411806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:58.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:58.696600  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:58.697564  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:58.912135  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.041032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.196602  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.196745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.412270  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:31:59.542109  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:31:59.696566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:31:59.697555  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:31:59.912666  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.040543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.195854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.196971  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.411627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:00.541861  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:00.695413  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:00.697418  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:00.913487  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.042233  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.195394  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.197595  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.412418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:01.541658  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:01.697383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:01.698671  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:01.914029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.042979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.198534  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.198746  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.413488  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:02.540091  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:02.699039  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:02.699243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:02.913260  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.042351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.196133  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.196797  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.412055  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:03.540813  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:03.695810  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:03.696278  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:03.912607  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.040336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.195923  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.197906  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.412339  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:04.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:04.697319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:04.697522  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:04.911871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.042054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.196538  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.197169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.412220  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:05.541589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:05.695349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:05.697593  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:05.911341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.041432  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.196668  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.196868  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.412383  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:06.541423  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:06.696264  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:06.696298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:06.912896  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.041463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.196315  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.196358  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.411993  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:07.540392  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:07.696388  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:07.697104  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:07.915258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.041599  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.196372  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.197301  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.413332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:08.541386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:08.700566  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:08.700862  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:08.912947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.042060  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.195871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.197176  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.411919  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:09.541000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:09.695999  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:09.697329  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:09.912222  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.042718  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.196344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.196356  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.411381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:10.541528  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:10.696498  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:10.698344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:10.912694  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.041130  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.195351  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.197663  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.412589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:11.540341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:11.697469  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:11.699618  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:11.912519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.041597  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.195947  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.197653  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.412598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:12.540709  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:12.696715  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:12.698050  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:12.928026  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.045349  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.197477  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.197509  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.412404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:13.542237  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:13.695946  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:13.696615  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:13.911988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.040943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.196098  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.197927  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.413238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:14.544438  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:14.696894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:14.697344  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:14.912008  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.040574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.196776  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.197552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.411737  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:15.540831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:15.696509  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:15.698462  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:15.913052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.041218  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.195703  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.198051  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.411991  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:16.541641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:16.695803  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:16.696905  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:16.911898  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.041120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.197033  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.197238  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.411848  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:17.541354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:17.695478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:17.696944  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:17.913147  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.042436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.196145  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.196396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.411839  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:18.540354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:18.696245  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:18.697787  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:18.912398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.042406  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.196069  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.199304  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.412956  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:19.544144  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:19.699711  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:19.702208  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:19.911864  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.043905  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.198903  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.199000  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.422787  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:20.544150  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:20.700200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:20.704668  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:20.917309  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.046645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.195931  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.196243  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.413120  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:21.546436  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:21.700169  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:21.700244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:21.914444  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.047410  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.200096  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.203625  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.417682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:22.546754  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:22.698047  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:22.703046  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:22.914138  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.045377  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.200843  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.201169  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.413890  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:23.543372  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:23.696626  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:23.697747  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:23.916852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.043284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.196831  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.196918  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.412927  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:24.541136  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:24.699213  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:24.701183  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:24.914129  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.043092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.321913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.323020  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.413045  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:25.542232  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:25.698054  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:25.699672  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:25.914677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.045854  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.197868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.197922  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.411543  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:26.550068  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:26.695076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:26.699427  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:26.912378  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.042894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.197017  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.199935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.417341  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:27.541748  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:27.695988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:27.698216  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:27.911305  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.043682  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.203306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.203790  248270 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:32:28.413791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:28.548187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:28.705853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:28.706201  248270 kapi.go:107] duration metric: took 2m30.513642085s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:32:28.911698  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.040817  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.197936  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.421552  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:29.541741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:29.696634  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:29.912225  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.041603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:32:30.195204  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.411259  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:30.541923  248270 kapi.go:107] duration metric: took 2m30.505431248s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:32:30.700458  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:30.916295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.198295  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.413624  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:31.701677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:31.934977  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.199414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.418730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:32.695325  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:32.913157  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.197510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.413102  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:32:33.696635  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:33.912976  248270 kapi.go:107] duration metric: took 2m32.004940635s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:32:33.914947  248270 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-819501 cluster.
	I1210 05:32:33.916673  248270 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:32:33.918159  248270 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
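	A minimal pod manifest sketch of the opt-out described in the gcp-auth messages above: the label key `gcp-auth-skip-secret` is taken from that output, while the "true" value, pod name, and image are illustrative assumptions, not part of this log.

	# Hypothetical example: opt a single pod out of GCP credential mounting.
	# Label key gcp-auth-skip-secret comes from the addon message above;
	# the value "true", the pod name, and the image are assumed for illustration.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo          # assumed name
	  labels:
	    gcp-auth-skip-secret: "true"   # assumed value; key cited in the addon output
	spec:
	  containers:
	  - name: app
	    image: busybox                 # assumed image
	    command: ["sleep", "3600"]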
	I1210 05:32:34.195642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:34.696627  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.196024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:35.695782  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.195094  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:36.696145  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.195456  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:37.695683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.240338  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:38.696247  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.196496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:39.695741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.196422  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:40.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.196332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:41.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.196570  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:42.695829  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.195357  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:43.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.195089  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:44.697101  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.195298  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:45.696306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:46.697545  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.195693  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:47.697106  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.196501  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:48.696838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.196330  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:49.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.196426  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:50.695540  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.196892  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:51.696131  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.195514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:52.695825  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.195964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:53.696041  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.195223  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:54.695628  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.196841  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:55.695740  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.194920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:56.696783  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.195842  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:57.696514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.196291  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:58.696032  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.195757  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:32:59.695913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:00.697408  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.196132  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:01.696525  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.195519  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:02.696676  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.196468  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:03.697155  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.196040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:04.695496  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.197092  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:05.696051  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.194871  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:06.696031  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.196751  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:07.697187  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.195603  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:08.696317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.196085  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:09.696248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.196156  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:10.695296  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.196584  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:11.700018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.195142  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:12.697083  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.195224  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:13.695306  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:14.696058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.195574  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:15.696336  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.195623  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:16.698100  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.197329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:17.695675  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.196683  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:18.697097  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.195492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:19.695645  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.197262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:20.695427  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.196795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:21.697360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.196642  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:22.696362  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.195333  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:23.695816  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:24.694821  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.195992  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:25.695334  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.196484  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:26.697312  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.196789  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:27.695382  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.197511  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:28.696029  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.195379  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:29.696478  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.196742  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:30.696617  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.196691  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:31.696332  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.197105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:32.695261  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:33.696412  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.196853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:34.695141  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.196017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:35.694870  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.196760  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:36.695716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.197084  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:37.694798  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.195481  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:38.696818  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.195103  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:39.695287  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.196285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:40.695441  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.196901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:41.695151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.196592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:42.695791  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.195539  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:43.697526  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.195494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:44.697258  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.195374  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:45.696246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.195360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:46.696595  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.195386  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:47.696495  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.195396  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:48.696399  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.195926  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:49.695752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.196644  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:50.696819  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.196463  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:51.695510  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.196329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:52.695117  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.195544  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:53.696499  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.196995  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:54.695938  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.196716  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:55.696111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.195836  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:56.697347  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.196443  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:57.696024  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.196687  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:58.696411  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.195151  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:33:59.696081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.196766  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:00.695605  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.196288  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:01.696235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.195823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:02.695868  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.196217  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:03.695561  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.195933  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:04.696154  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.195390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:05.696250  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.196127  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:06.695227  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.200317  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:07.696082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.197235  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:08.696004  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.196361  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:09.695240  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.196104  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:10.695419  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.196319  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:11.694964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.196974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:12.696337  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.195978  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:13.696471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.195269  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:14.695674  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.197294  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:15.695137  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.196248  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:16.694996  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.196381  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:17.695404  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.196195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:18.695758  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.196179  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:19.695177  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.195565  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:20.696242  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.196197  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:21.695191  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.196188  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:22.694979  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.194913  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:23.697081  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.196158  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:24.694728  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.196611  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:25.697492  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.196964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:26.696515  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.195840  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:27.695719  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.196490  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:28.696390  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.195290  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:29.695663  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.201407  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:30.695974  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.197190  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:31.695471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.196744  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:32.696589  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.195808  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:33.696661  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.196415  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:34.695620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.195699  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:35.696759  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.196702  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:36.695614  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.194964  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:37.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.196210  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:38.696076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.196613  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:39.696165  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.198418  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:40.695943  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.197398  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:41.695772  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.195040  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:42.696314  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.195340  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:43.696359  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.196391  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:44.696030  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.195397  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:45.696360  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.195580  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:46.696472  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.196255  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:47.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.195578  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:48.696577  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.197485  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:49.695598  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.196297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:50.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.196752  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:51.697037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.196530  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:52.696049  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.196414  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:53.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.196246  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:54.696013  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.197119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:55.695940  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.195722  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:56.695900  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.195405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:57.694853  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.195846  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:58.696065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.196493  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:34:59.695901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.196143  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:00.695285  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.197326  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:01.694920  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.194762  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:02.696331  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.195592  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:03.696344  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.195284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:04.695602  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.195944  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:05.695625  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.195741  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:06.697910  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.196058  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:07.694812  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.195849  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:08.697178  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.195957  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:09.696494  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.196796  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:10.696794  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.196939  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:11.695777  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.196354  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:12.697453  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.195090  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:13.695593  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.195988  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:14.699457  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.196105  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:15.695271  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.195646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:16.696815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.195859  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:17.696286  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.195152  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:18.696022  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.196238  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:19.696329  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.195730  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:20.696445  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.196514  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:21.695588  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.197806  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:22.697353  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.196082  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:23.696133  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.196367  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:24.696099  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.195852  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:25.695686  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.195500  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:26.697384  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.196018  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:27.694823  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.196076  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:28.695462  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.195471  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:29.696297  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.196124  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:30.696556  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.196283  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:31.695815  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.196641  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:32.697074  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.195733  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:33.697646  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.195935  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:34.696025  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.195838  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:35.695200  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.196244  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:36.696951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.196003  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:37.695262  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.196037  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:38.695111  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.195459  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:39.696434  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.198620  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:40.697017  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.195949  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:41.695731  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.195894  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:42.695975  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.194970  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:43.695677  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.197208  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:44.695958  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.196050  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:45.695370  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.195901  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:46.695795  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.196346  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:47.695774  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.194982  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:48.697065  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.195134  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:49.694745  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.195670  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:50.695951  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.196052  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:51.696732  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.197195  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:52.696093  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.195202  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:53.695405  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.196119  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:54.696476  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.196403  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:55.695788  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.195036  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:56.696466  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.198284  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:57.695289  248270 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:35:58.191968  248270 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1210 05:35:58.192004  248270 kapi.go:107] duration metric: took 6m0.000635436s to wait for kubernetes.io/minikube-addons=registry ...
	W1210 05:35:58.192136  248270 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1210 05:35:58.194007  248270 out.go:179] * Enabled addons: inspektor-gadget, ingress-dns, storage-provisioner, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, yakd, volumesnapshots, ingress, csi-hostpath-driver, gcp-auth
	I1210 05:35:58.195457  248270 addons.go:530] duration metric: took 6m11.428581243s for enable addons: enabled=[inspektor-gadget ingress-dns storage-provisioner amd-gpu-device-plugin cloud-spanner nvidia-device-plugin default-storageclass registry-creds metrics-server yakd volumesnapshots ingress csi-hostpath-driver gcp-auth]
	I1210 05:35:58.195518  248270 start.go:247] waiting for cluster config update ...
	I1210 05:35:58.195551  248270 start.go:256] writing updated cluster config ...
	I1210 05:35:58.195954  248270 ssh_runner.go:195] Run: rm -f paused
	I1210 05:35:58.205700  248270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:58.211367  248270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.216998  248270 pod_ready.go:94] pod "coredns-66bc5c9577-lwtl7" is "Ready"
	I1210 05:35:58.217026  248270 pod_ready.go:86] duration metric: took 5.6329ms for pod "coredns-66bc5c9577-lwtl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.219507  248270 pod_ready.go:83] waiting for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.225094  248270 pod_ready.go:94] pod "etcd-addons-819501" is "Ready"
	I1210 05:35:58.225120  248270 pod_ready.go:86] duration metric: took 5.593139ms for pod "etcd-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.227244  248270 pod_ready.go:83] waiting for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.231410  248270 pod_ready.go:94] pod "kube-apiserver-addons-819501" is "Ready"
	I1210 05:35:58.231431  248270 pod_ready.go:86] duration metric: took 4.167307ms for pod "kube-apiserver-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.234812  248270 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.610624  248270 pod_ready.go:94] pod "kube-controller-manager-addons-819501" is "Ready"
	I1210 05:35:58.610654  248270 pod_ready.go:86] duration metric: took 375.820379ms for pod "kube-controller-manager-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:58.811334  248270 pod_ready.go:83] waiting for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.211461  248270 pod_ready.go:94] pod "kube-proxy-ngpzv" is "Ready"
	I1210 05:35:59.211491  248270 pod_ready.go:86] duration metric: took 400.130316ms for pod "kube-proxy-ngpzv" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.410382  248270 pod_ready.go:83] waiting for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811154  248270 pod_ready.go:94] pod "kube-scheduler-addons-819501" is "Ready"
	I1210 05:35:59.811187  248270 pod_ready.go:86] duration metric: took 400.778411ms for pod "kube-scheduler-addons-819501" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:35:59.811204  248270 pod_ready.go:40] duration metric: took 1.605466877s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:35:59.859434  248270 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 05:35:59.861511  248270 out.go:179] * Done! kubectl is now configured to use "addons-819501" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.146775106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345285146747419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=830ed113-b93c-41db-a139-ad05151e84cb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.147672501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37d79868-ae6e-48ad-93df-e50ec7417187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.147732560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37d79868-ae6e-48ad-93df-e50ec7417187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.148016585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37d79868-ae6e-48ad-93df-e50ec7417187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.187732801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eac0bd6c-4a2a-4945-8901-87a04eebcbd0 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.187929889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eac0bd6c-4a2a-4945-8901-87a04eebcbd0 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.189901506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8be7cdfd-6ba6-4c88-93e4-dc78d1b16bfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.190993152Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345285190960366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8be7cdfd-6ba6-4c88-93e4-dc78d1b16bfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.192367865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bda1ee7-b6bd-440b-b9d6-7a8a4e6696f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.192503824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bda1ee7-b6bd-440b-b9d6-7a8a4e6696f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.192804000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bda1ee7-b6bd-440b-b9d6-7a8a4e6696f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.238817702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec988855-f4a0-4f57-93ea-8de6e94e4bb3 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.238916798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec988855-f4a0-4f57-93ea-8de6e94e4bb3 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.240367910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cec0581e-31be-47d3-a465-5ef462061e7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.241874463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345285241844968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cec0581e-31be-47d3-a465-5ef462061e7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.242781436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50fe6068-cd37-45b7-86ef-f1000f1811fa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.242859026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50fe6068-cd37-45b7-86ef-f1000f1811fa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.243197596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50fe6068-cd37-45b7-86ef-f1000f1811fa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.278147422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13532f63-3f7a-4604-8448-0c212f327b5c name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.278221189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13532f63-3f7a-4604-8448-0c212f327b5c name=/runtime.v1.RuntimeService/Version
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.280602242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e006c26e-118d-4064-8bb1-83fe1093270f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.282340749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765345285282247070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:457883,},InodesUsed:&UInt64Value{Value:162,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e006c26e-118d-4064-8bb1-83fe1093270f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.283249904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=203dc2c4-7629-443f-9885-29ee75b87ce8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.283381259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=203dc2c4-7629-443f-9885-29ee75b87ce8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:41:25 addons-819501 crio[812]: time="2025-12-10 05:41:25.283764097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25e1deb92b90406521b7a79e047d89c517cd035a44e88d682c0941a2f0d94ceb,PodSandboxId:ba29e6b659914c39c0bee679c2c220da4d0fd56110470e69c0b07a5dc5a75614,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345033128650784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 390b2e80-0538-4ebe-ae5c-2e24388c48e0,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73d958852375d517ec8e880e264554b882f25961e3b94c83033ce6fdf91dfcd,PodSandboxId:14f0fd53f12b8b26bc3ff88bf56950e4a57714e825ab5e0f66c12e4379f63f4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765344963193852703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 479ad0f6-afd3-427d-9618-0e77a36d2f86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1668d0c1d28737b284756a644d3460fac7e536f89dc7c5792d6ff78707639682,PodSandboxId:1e43763de09da6e66f05a5b77ff6a2d0f556cf6c72f5baacc6667687d5ad0c9a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765344666031370479,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vsz96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5d8fbcbe-12b5-4e1c-bbb5-f208e41d45d3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db81abdfb84eb3c48379ba982a1f44c8e7d2887cfe2d24f6d0d020e96bff32,PodSandboxId:a5a64939f7ce8b3d1d4298014698f9340fd0246eb2520d98dd053abe11a1aaaa,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1765344634678579440,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-25pr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945db864-8d9f-4e37-b866-28b
9f77d42c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320,PodSandboxId:3380b4d1273bef6f6154c47e1dadd09e5631fc6543ce2cce181b3c0c1c09ad4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765344597013974389,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storag
e-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76ed88a2-563b-4ee6-9a9a-94669a45bd2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203b77791ed58bed3f44112a9e6cd0fd2fb732df525a634b3ada0ef925ee220c,PodSandboxId:9061092924ee0fdfef082b98a2aa59defc43ab9ccd0f5374e3dd05a3a81d5354,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765344596045116432,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugi
n,io.kubernetes.pod.name: amd-gpu-device-plugin-xwmk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca338a7b-5d2c-4894-a615-0224cddd49ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24,PodSandboxId:5c479d4bba0cae8bb7cf900777ec470fcb9191f52aa792281fea13aabd9dea07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765344589483389368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: co
redns-66bc5c9577-lwtl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84b8912-1587-45b3-956c-791ea7ec71c6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68,PodSandboxId:1b437dd96d11016f50ea95521ddb655effee608acd467d4c5b86608938bce0e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8
e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765344588559660009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngpzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75c58eba-0463-42c9-a9d6-3c579349bd49,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9,PodSandboxId:f274a008098638e228e863ee3091aab6eb97a0b43dd678f1969a077a1a9cdd3d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1b
e02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765344575603886543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa13790632d350c6bc30d2faa0b6f981,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948,PodSandboxId:27b140f7dc29d6de2662a56308de71b940b56f046ecec1f25c3af063d060eba9,Metadata:&Co
ntainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765344575527641867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0266aeda1eb6dc0732ac0ca983358e,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f05e43ec5e70f9d6ff09ee
4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f,PodSandboxId:3310898dd2e7bb9fa4eaa4121131d47401c80bff34c6a4866a34165c6378f9b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765344575558920700,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2613b60cb2b81953748c1f1f1ecd406,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a,PodSandboxId:079499566b48e559bfd84ff27fd1e0980238a3d2a6146a461496ea2fb96ba2fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765344575523607276,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-819501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9414453fd34af8fe84f77d6b515bc5e6,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=203dc2c4-7629-443f-9885-29ee75b87ce8 name=/runtime.v1.RuntimeService/ListContainers
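The repeated Version / ImageFsInfo / ListContainers requests above are the kubelet's periodic CRI polling of CRI-O; each cycle returns the same set of running containers. For reference only, the same ListContainers call can be issued directly against the node's CRI socket. The sketch below is illustrative, not part of the test: it assumes the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client, and running `sudo crictl ps -a` on the node gives the equivalent listing.

// Minimal sketch (assumptions noted above): list containers over the CRI-O socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default endpoint is a unix socket (assumed path).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" debug line above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print truncated IDs, mirroring the "container status" table below.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}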
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	25e1deb92b904       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                 4 minutes ago       Running             nginx                     0                   ba29e6b659914       nginx                                     default
	c73d958852375       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                5 minutes ago       Running             busybox                   0                   14f0fd53f12b8       busybox                                   default
	1668d0c1d2873       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef   10 minutes ago      Running             local-path-provisioner    0                   1e43763de09da       local-path-provisioner-648f6765c9-vsz96   local-path-storage
	82db81abdfb84       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac    10 minutes ago      Running             registry-proxy            0                   a5a64939f7ce8       registry-proxy-25pr7                      kube-system
	cb5e7c29a0f38       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   11 minutes ago      Running             storage-provisioner       0                   3380b4d1273be       storage-provisioner                       kube-system
	203b77791ed58       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f           11 minutes ago      Running             amd-gpu-device-plugin     0                   9061092924ee0       amd-gpu-device-plugin-xwmk6               kube-system
	56051fcb51898       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                   11 minutes ago      Running             coredns                   0                   5c479d4bba0ca       coredns-66bc5c9577-lwtl7                  kube-system
	6bca39dd5c266       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                   11 minutes ago      Running             kube-proxy                0                   1b437dd96d110       kube-proxy-ngpzv                          kube-system
	1326c7547c796       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                   11 minutes ago      Running             kube-scheduler            0                   f274a00809863       kube-scheduler-addons-819501              kube-system
	f05e43ec5e70f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                   11 minutes ago      Running             etcd                      0                   3310898dd2e7b       etcd-addons-819501                        kube-system
	7c800fe0c31f2       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                   11 minutes ago      Running             kube-controller-manager   0                   27b140f7dc29d       kube-controller-manager-addons-819501     kube-system
	633a185de0b3b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                   11 minutes ago      Running             kube-apiserver            0                   079499566b48e       kube-apiserver-addons-819501              kube-system
	
	
	==> coredns [56051fcb51898cbc9faf5ef6c5f8d47d57e6a448a510b96f7d9dc695ee76bd24] <==
	[INFO] 10.244.0.7:57262 - 41859 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000343409s
	[INFO] 10.244.0.7:45242 - 12012 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000315247s
	[INFO] 10.244.0.7:45242 - 33387 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000130848s
	[INFO] 10.244.0.7:45242 - 38487 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000145448s
	[INFO] 10.244.0.7:45242 - 7399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00010296s
	[INFO] 10.244.0.7:45242 - 25165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000134205s
	[INFO] 10.244.0.7:45242 - 45292 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000223557s
	[INFO] 10.244.0.7:45242 - 13184 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000141017s
	[INFO] 10.244.0.7:45242 - 35883 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00012141s
	[INFO] 10.244.0.7:38258 - 4001 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000283684s
	[INFO] 10.244.0.7:38258 - 14568 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000364749s
	[INFO] 10.244.0.7:38258 - 24011 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000107403s
	[INFO] 10.244.0.7:38258 - 36143 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000248384s
	[INFO] 10.244.0.7:38258 - 59456 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000125129s
	[INFO] 10.244.0.7:38258 - 17993 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000130137s
	[INFO] 10.244.0.7:38258 - 19076 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000094655s
	[INFO] 10.244.0.7:38258 - 51817 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000230046s
	[INFO] 10.244.0.7:51994 - 26272 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000743505s
	[INFO] 10.244.0.7:51994 - 57505 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000150451s
	[INFO] 10.244.0.7:51994 - 51783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000125818s
	[INFO] 10.244.0.7:51994 - 32247 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000184366s
	[INFO] 10.244.0.7:51994 - 41436 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000107245s
	[INFO] 10.244.0.7:51994 - 16539 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008877s
	[INFO] 10.244.0.7:51994 - 37707 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000114264s
	[INFO] 10.244.0.7:51994 - 41119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000336365s
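The NXDOMAIN/NOERROR pattern above is ordinary search-path expansion for an in-cluster lookup of registry.kube-system.svc.cluster.local (presumably from the registry-proxy pod at 10.244.0.7): the name has four dots, below the default Kubernetes ndots:5 threshold, so the resolver first appends each search suffix visible in the log (kube-system.svc.cluster.local, svc.cluster.local, cluster.local), each returning NXDOMAIN, before the bare name resolves with NOERROR. In other words, DNS resolution of the registry Service itself was working.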
	
	
	==> describe nodes <==
	Name:               addons-819501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-819501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-819501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_29_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-819501
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-819501
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:41:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:37:43 +0000   Wed, 10 Dec 2025 05:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.227
	  Hostname:    addons-819501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba6b4ebfa05046a9ba182a04e8831219
	  System UUID:                ba6b4ebf-a050-46a9-ba18-2a04e8831219
	  Boot ID:                    216e7b9f-8c01-493d-bad4-cf3938ee1b07
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  default                     hello-world-app-5d498dc89-t67b9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 amd-gpu-device-plugin-xwmk6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-lwtl7                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-819501                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-819501               250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-819501      200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-ngpzv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-819501               100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-6b586f9694-lkhvn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-proxy-25pr7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-648f6765c9-vsz96    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-819501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-819501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-819501 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-819501 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-819501 event: Registered Node addons-819501 in Controller
	
	
	==> dmesg <==
	[Dec10 05:32] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000055] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.804071] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.010431] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.587817] kauditd_printk_skb: 11 callbacks suppressed
	[Dec10 05:36] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.071354] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.011099] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.409106] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.223792] kauditd_printk_skb: 72 callbacks suppressed
	[  +0.701216] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.750190] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.999003] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.624936] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 05:37] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.180251] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.341207] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.855665] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.223501] kauditd_printk_skb: 127 callbacks suppressed
	[Dec10 05:38] kauditd_printk_skb: 15 callbacks suppressed
	[Dec10 05:39] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.888203] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.052746] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 05:40] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f05e43ec5e70f9d6ff09ee4c9d3ae27499bd9f5d60c920e0e39cd9a7463cc12f] <==
	{"level":"warn","ts":"2025-12-10T05:30:28.890002Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.047857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:30:28.890050Z","caller":"traceutil/trace.go:172","msg":"trace[629805527] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:992; }","duration":"200.098103ms","start":"2025-12-10T05:30:28.689944Z","end":"2025-12-10T05:30:28.890042Z","steps":["trace[629805527] 'agreement among raft nodes before linearized reading'  (duration: 200.027767ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011871Z","caller":"traceutil/trace.go:172","msg":"trace[1685321480] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"232.957389ms","start":"2025-12-10T05:31:13.778902Z","end":"2025-12-10T05:31:14.011860Z","steps":["trace[1685321480] 'process raft request'  (duration: 232.862929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:31:14.011968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.972115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:31:14.011995Z","caller":"traceutil/trace.go:172","msg":"trace[2111277061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"108.022783ms","start":"2025-12-10T05:31:13.903967Z","end":"2025-12-10T05:31:14.011990Z","steps":["trace[2111277061] 'agreement among raft nodes before linearized reading'  (duration: 107.945145ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:31:14.011799Z","caller":"traceutil/trace.go:172","msg":"trace[1752202248] linearizableReadLoop","detail":"{readStateIndex:1150; appliedIndex:1150; }","duration":"107.780261ms","start":"2025-12-10T05:31:13.903991Z","end":"2025-12-10T05:31:14.011771Z","steps":["trace[1752202248] 'read index received'  (duration: 107.774382ms)","trace[1752202248] 'applied index is now lower than readState.Index'  (duration: 5.187µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:32:25.305922Z","caller":"traceutil/trace.go:172","msg":"trace[1251573471] linearizableReadLoop","detail":"{readStateIndex:1319; appliedIndex:1319; }","duration":"121.87535ms","start":"2025-12-10T05:32:25.184006Z","end":"2025-12-10T05:32:25.305882Z","steps":["trace[1251573471] 'read index received'  (duration: 121.869174ms)","trace[1251573471] 'applied index is now lower than readState.Index'  (duration: 5.013µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:32:25.306150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.098039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306183Z","caller":"traceutil/trace.go:172","msg":"trace[135230896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.173364ms","start":"2025-12-10T05:32:25.184001Z","end":"2025-12-10T05:32:25.306174Z","steps":["trace[135230896] 'agreement among raft nodes before linearized reading'  (duration: 122.073487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306204Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.992998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306322Z","caller":"traceutil/trace.go:172","msg":"trace[181149218] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"122.021433ms","start":"2025-12-10T05:32:25.184200Z","end":"2025-12-10T05:32:25.306222Z","steps":["trace[181149218] 'agreement among raft nodes before linearized reading'  (duration: 121.979708ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:25.306014Z","caller":"traceutil/trace.go:172","msg":"trace[1094354552] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"172.845984ms","start":"2025-12-10T05:32:25.133156Z","end":"2025-12-10T05:32:25.306002Z","steps":["trace[1094354552] 'process raft request'  (duration: 172.745468ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:32:25.306478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.446698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:32:25.306494Z","caller":"traceutil/trace.go:172","msg":"trace[180843504] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1269; }","duration":"112.467638ms","start":"2025-12-10T05:32:25.194022Z","end":"2025-12-10T05:32:25.306490Z","steps":["trace[180843504] 'agreement among raft nodes before linearized reading'  (duration: 112.436939ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:32:38.226558Z","caller":"traceutil/trace.go:172","msg":"trace[587922016] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"227.161495ms","start":"2025-12-10T05:32:37.999377Z","end":"2025-12-10T05:32:38.226539Z","steps":["trace[587922016] 'process raft request'  (duration: 226.986721ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.519985Z","caller":"traceutil/trace.go:172","msg":"trace[1445196895] linearizableReadLoop","detail":"{readStateIndex:1981; appliedIndex:1981; }","duration":"234.877958ms","start":"2025-12-10T05:36:40.285064Z","end":"2025-12-10T05:36:40.519942Z","steps":["trace[1445196895] 'read index received'  (duration: 234.872722ms)","trace[1445196895] 'applied index is now lower than readState.Index'  (duration: 4.502µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:36:40.520340Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.154055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520246Z","caller":"traceutil/trace.go:172","msg":"trace[297048297] transaction","detail":"{read_only:false; response_revision:1873; number_of_response:1; }","duration":"262.670866ms","start":"2025-12-10T05:36:40.257562Z","end":"2025-12-10T05:36:40.520233Z","steps":["trace[297048297] 'process raft request'  (duration: 262.516618ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:40.520381Z","caller":"traceutil/trace.go:172","msg":"trace[281380984] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1872; }","duration":"235.311996ms","start":"2025-12-10T05:36:40.285059Z","end":"2025-12-10T05:36:40.520371Z","steps":["trace[281380984] 'agreement among raft nodes before linearized reading'  (duration: 235.122733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:36:40.520595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.849704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:36:40.520613Z","caller":"traceutil/trace.go:172","msg":"trace[1952149145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1873; }","duration":"150.87196ms","start":"2025-12-10T05:36:40.369736Z","end":"2025-12-10T05:36:40.520608Z","steps":["trace[1952149145] 'agreement among raft nodes before linearized reading'  (duration: 150.835739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:36:55.773886Z","caller":"traceutil/trace.go:172","msg":"trace[380006586] transaction","detail":"{read_only:false; response_revision:1936; number_of_response:1; }","duration":"170.469213ms","start":"2025-12-10T05:36:55.603390Z","end":"2025-12-10T05:36:55.773859Z","steps":["trace[380006586] 'process raft request'  (duration: 169.093512ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:39:36.999194Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2025-12-10T05:39:37.126471Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1514,"took":"125.267941ms","hash":3277531955,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4313088,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2025-12-10T05:39:37.126557Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3277531955,"revision":1514,"compact-revision":-1}
	
	
	==> kernel <==
	 05:41:25 up 12 min,  0 users,  load average: 0.37, 0.64, 0.56
	Linux addons-819501 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [633a185de0b3b1805291a6b501a068c683ec3e966af359fe1ea6780aa717bc5a] <==
	E1210 05:30:25.400807       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:30:25.402016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 05:30:25.766164       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:36:09.659812       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43450: use of closed network connection
	E1210 05:36:09.874699       1 conn.go:339] Error on socket receive: read tcp 192.168.50.227:8443->192.168.50.1:43486: use of closed network connection
	I1210 05:36:19.186903       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.102.154"}
	I1210 05:36:26.827800       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1210 05:37:05.234793       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:37:05.460234       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.93.167"}
	I1210 05:37:20.960481       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 05:37:37.370348       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.370420       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.400032       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.400095       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.440464       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.440520       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.462791       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.462856       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1210 05:37:37.487024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1210 05:37:37.487088       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1210 05:37:38.441153       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1210 05:37:38.487618       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1210 05:37:38.518385       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1210 05:39:32.596673       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.118.237"}
	I1210 05:39:38.664950       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [7c800fe0c31f20e4a21fa2a16d1d3df0bede8a1616a6bdfdb35309a80eff0948] <==
	E1210 05:38:19.595205       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:19.596663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:45.549579       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:45.552047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:38:49.253227       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:38:49.254692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:02.992372       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:02.993613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:28.832479       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:28.833625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:36.422519       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:36.423840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:39:37.275765       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:39:37.277570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1210 05:39:47.168907       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	E1210 05:40:26.694850       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:26.696003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:26.748489       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:26.749593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:37.079871       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:37.081093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:40:59.516011       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:40:59.517476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1210 05:41:06.818849       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1210 05:41:06.820228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [6bca39dd5c266678379788c5509d445f2c7f0618017905e8780d1f277e1a2f68] <==
	I1210 05:29:49.296578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:29:49.398048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:29:49.398087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.227"]
	E1210 05:29:49.398156       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:29:49.810599       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:29:49.810670       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:29:49.810702       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:29:49.903119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:29:49.904430       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:29:49.904446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:49.914988       1 config.go:200] "Starting service config controller"
	I1210 05:29:49.915001       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:29:49.915020       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:29:49.915023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:29:49.915033       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:29:49.915036       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:29:49.917483       1 config.go:309] "Starting node config controller"
	I1210 05:29:49.917744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:29:49.917814       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:29:50.016054       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:29:50.016117       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:29:50.016149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
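	The "Kube-proxy configuration may be incomplete" warning above is only about the unset nodePortAddresses field; everything else synced normally. In a kubeadm-style cluster such as this one the field lives in the kube-proxy ConfigMap. A minimal sketch of inspecting it (the "primary" value is the warning's own suggestion, not something configured in this run):
	
	  kubectl --context addons-819501 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	  # the warning suggests setting, inside the config.conf key of that ConfigMap:
	  #   nodePortAddresses: ["primary"]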
	
	
	==> kube-scheduler [1326c7547c7963b52bec57cc6cc714607ace502af9ebbaf0c16fa2f43341eed9] <==
	I1210 05:29:39.754860       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:29:39.763116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.763203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:29:39.764832       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 05:29:39.765071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 05:29:39.765817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:29:39.769470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:29:39.772002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:29:39.772200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:29:39.772466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:29:39.772723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:29:39.773020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:29:39.773348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:29:39.773463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:29:39.773482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:29:39.773493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:29:39.777582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:29:39.777709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:29:39.777771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:29:39.778948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:29:39.779052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:29:39.779103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:29:39.779170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:29:39.779217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1210 05:29:41.063910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:41:02 addons-819501 kubelet[2254]: E1210 05:41:02.283849    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345262283230642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:02 addons-819501 kubelet[2254]: E1210 05:41:02.283880    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345262283230642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:06 addons-819501 kubelet[2254]: I1210 05:41:06.850081    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:06 addons-819501 kubelet[2254]: E1210 05:41:06.851636    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:41:12 addons-819501 kubelet[2254]: E1210 05:41:12.289956    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345272288385450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:12 addons-819501 kubelet[2254]: E1210 05:41:12.290002    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345272288385450  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:17 addons-819501 kubelet[2254]: E1210 05:41:17.315072    2254 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 10 05:41:17 addons-819501 kubelet[2254]: E1210 05:41:17.315132    2254 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 10 05:41:17 addons-819501 kubelet[2254]: E1210 05:41:17.316619    2254 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042_local-path-storage(f7c4d6ff-87f7-45e9-a730-2f343d7472fa): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 10 05:41:17 addons-819501 kubelet[2254]: E1210 05:41:17.316776    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55257a16-40ec-4ba5-85bc-b3ed60007042" podUID="f7c4d6ff-87f7-45e9-a730-2f343d7472fa"
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.230207    2254 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnr8v\" (UniqueName: \"kubernetes.io/projected/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-kube-api-access-rnr8v\") pod \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") "
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.230339    2254 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-data\") pod \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") "
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.230366    2254 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-script\") pod \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\" (UID: \"f7c4d6ff-87f7-45e9-a730-2f343d7472fa\") "
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.231081    2254 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-script" (OuterVolumeSpecName: "script") pod "f7c4d6ff-87f7-45e9-a730-2f343d7472fa" (UID: "f7c4d6ff-87f7-45e9-a730-2f343d7472fa"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.231377    2254 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-data" (OuterVolumeSpecName: "data") pod "f7c4d6ff-87f7-45e9-a730-2f343d7472fa" (UID: "f7c4d6ff-87f7-45e9-a730-2f343d7472fa"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.233800    2254 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-kube-api-access-rnr8v" (OuterVolumeSpecName: "kube-api-access-rnr8v") pod "f7c4d6ff-87f7-45e9-a730-2f343d7472fa" (UID: "f7c4d6ff-87f7-45e9-a730-2f343d7472fa"). InnerVolumeSpecName "kube-api-access-rnr8v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.330867    2254 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rnr8v\" (UniqueName: \"kubernetes.io/projected/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-kube-api-access-rnr8v\") on node \"addons-819501\" DevicePath \"\""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.330942    2254 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-data\") on node \"addons-819501\" DevicePath \"\""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.330952    2254 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f7c4d6ff-87f7-45e9-a730-2f343d7472fa-script\") on node \"addons-819501\" DevicePath \"\""
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.850780    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lkhvn" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:19 addons-819501 kubelet[2254]: E1210 05:41:19.854217    2254 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-lkhvn" podUID="0a8387d7-19c7-49cd-8425-48c60f2e70ae"
	Dec 10 05:41:19 addons-819501 kubelet[2254]: I1210 05:41:19.855568    2254 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c4d6ff-87f7-45e9-a730-2f343d7472fa" path="/var/lib/kubelet/pods/f7c4d6ff-87f7-45e9-a730-2f343d7472fa/volumes"
	Dec 10 05:41:20 addons-819501 kubelet[2254]: I1210 05:41:20.850798    2254 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-lwtl7" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:41:22 addons-819501 kubelet[2254]: E1210 05:41:22.295569    2254 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765345282294687901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
	Dec 10 05:41:22 addons-819501 kubelet[2254]: E1210 05:41:22.295624    2254 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765345282294687901  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:457883}  inodes_used:{value:162}}"
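	Every image-pull failure in the kubelet log above is the same docker.io "toomanyrequests" rate limit, not a CRI or node problem. It can be reproduced from inside the guest with the exact image reference taken from the log; a minimal sketch, assuming crictl is available in the minikube VM:
	
	  out/minikube-linux-amd64 ssh -p addons-819501 "sudo crictl pull docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
	  # while rate-limited this fails with: toomanyrequests: You have reached your unauthenticated pull rate limit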
	
	
	==> storage-provisioner [cb5e7c29a0f38399bf5960063515490e581ab54b0530f811faa127ab9bde6320] <==
	W1210 05:41:00.620372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:02.624699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:02.633654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:04.637852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:04.644106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:06.649535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:06.656758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:08.661360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:08.666790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:10.671740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:10.680251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:12.683889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:12.691841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:14.695690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:14.704184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:16.709134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:16.715604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:18.725000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:18.730822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:20.734723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:20.739841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:22.745798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:22.755066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:24.761211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:41:24.768216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-819501 -n addons-819501
helpers_test.go:270: (dbg) Run:  kubectl --context addons-819501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn: exit status 1 (92.292039ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-t67b9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-819501/192.168.50.227
	Start Time:       Wed, 10 Dec 2025 05:39:32 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbf7r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bbf7r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  114s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-t67b9 to addons-819501
	  Warning  Failed     39s                 kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     39s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    39s                 kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     39s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x2 over 113s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4jq9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-v4jq9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-6b586f9694-lkhvn" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-819501 describe pod hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (302.65s)
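The non-zero exit from the combined describe above only reflects that registry-6b586f9694-lkhvn had already been deleted by the time the post-mortem ran; the two pods that still exist are fully described in the stdout block. A tolerant variant of the same post-mortem step, as a sketch using the pod names reported above:

  for p in hello-world-app-5d498dc89-t67b9 test-local-path registry-6b586f9694-lkhvn; do
    kubectl --context addons-819501 describe pod "$p" || true
  done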

TestFunctional/parallel/DashboardCmd (302.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399479 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399479 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399479 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399479 --alsologtostderr -v=1] stderr:
I1210 05:49:56.184393  258889 out.go:360] Setting OutFile to fd 1 ...
I1210 05:49:56.184511  258889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:49:56.184520  258889 out.go:374] Setting ErrFile to fd 2...
I1210 05:49:56.184525  258889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:49:56.184752  258889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:49:56.185024  258889 mustload.go:66] Loading cluster: functional-399479
I1210 05:49:56.185397  258889 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:49:56.187336  258889 host.go:66] Checking if "functional-399479" exists ...
I1210 05:49:56.187556  258889 api_server.go:166] Checking apiserver status ...
I1210 05:49:56.187603  258889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 05:49:56.190256  258889 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:49:56.190728  258889 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:49:56.190763  258889 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:49:56.190930  258889 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:49:56.292354  258889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8171/cgroup
W1210 05:49:56.305501  258889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8171/cgroup: Process exited with status 1
stdout:

stderr:
I1210 05:49:56.305600  258889 ssh_runner.go:195] Run: ls
I1210 05:49:56.312072  258889 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8441/healthz ...
I1210 05:49:56.318551  258889 api_server.go:279] https://192.168.50.97:8441/healthz returned 200:
ok
W1210 05:49:56.318616  258889 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1210 05:49:56.318780  258889 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:49:56.318808  258889 addons.go:70] Setting dashboard=true in profile "functional-399479"
I1210 05:49:56.318818  258889 addons.go:239] Setting addon dashboard=true in "functional-399479"
I1210 05:49:56.318850  258889 host.go:66] Checking if "functional-399479" exists ...
I1210 05:49:56.322722  258889 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1210 05:49:56.324048  258889 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1210 05:49:56.325429  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1210 05:49:56.325454  258889 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1210 05:49:56.327984  258889 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:49:56.328447  258889 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:49:56.328476  258889 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:49:56.328633  258889 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:49:56.431929  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1210 05:49:56.431964  258889 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1210 05:49:56.456369  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1210 05:49:56.456404  258889 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1210 05:49:56.482244  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1210 05:49:56.482272  258889 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1210 05:49:56.505990  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1210 05:49:56.506014  258889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1210 05:49:56.528701  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1210 05:49:56.528731  258889 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1210 05:49:56.552406  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1210 05:49:56.552436  258889 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1210 05:49:56.575288  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1210 05:49:56.575321  258889 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1210 05:49:56.602265  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1210 05:49:56.602299  258889 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1210 05:49:56.627100  258889 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1210 05:49:56.627137  258889 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1210 05:49:56.654101  258889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 05:49:57.440552  258889 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-399479 addons enable metrics-server

I1210 05:49:57.442148  258889 addons.go:202] Writing out "functional-399479" config to set dashboard=true...
W1210 05:49:57.442511  258889 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1210 05:49:57.443498  258889 kapi.go:59] client config for functional-399479: &rest.Config{Host:"https://192.168.50.97:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.key", CAFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1210 05:49:57.444172  258889 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1210 05:49:57.444204  258889 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1210 05:49:57.444215  258889 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1210 05:49:57.444221  258889 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1210 05:49:57.444228  258889 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1210 05:49:57.455989  258889 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  06f4b1f8-1c52-45ed-bb87-fd737cacdf59 887 0 2025-12-10 05:49:57 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-10 05:49:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.61.116,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.61.116],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1210 05:49:57.456141  258889 out.go:285] * Launching proxy ...
* Launching proxy ...
I1210 05:49:57.456258  258889 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-399479 proxy --port 36195]
I1210 05:49:57.456742  258889 dashboard.go:159] Waiting for kubectl to output host:port ...
I1210 05:49:57.512195  258889 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1210 05:49:57.512263  258889 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1210 05:49:57.521401  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5be312e6-0267-4212-bafd-c85dd9a016f5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000802ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0780 TLS:<nil>}
I1210 05:49:57.521478  258889 retry.go:31] will retry after 120.462µs: Temporary Error: unexpected response code: 503
I1210 05:49:57.525806  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b1060fc5-c7c6-48de-93a0-863ebf3555c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000795140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dc8c0 TLS:<nil>}
I1210 05:49:57.525917  258889 retry.go:31] will retry after 211.12µs: Temporary Error: unexpected response code: 503
I1210 05:49:57.531840  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e80d72ed-0f17-47e1-8a99-6d15fe95ec46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000802fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000510f00 TLS:<nil>}
I1210 05:49:57.531912  258889 retry.go:31] will retry after 254.844µs: Temporary Error: unexpected response code: 503
I1210 05:49:57.539601  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3137b514-a7cd-429e-bb93-19e103082bfd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000a06b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dca00 TLS:<nil>}
I1210 05:49:57.539692  258889 retry.go:31] will retry after 295.142µs: Temporary Error: unexpected response code: 503
I1210 05:49:57.543781  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[babcb9ca-e543-4a62-938f-923ce28780ed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc0008030c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0a00 TLS:<nil>}
I1210 05:49:57.543839  258889 retry.go:31] will retry after 417.62µs: Temporary Error: unexpected response code: 503
I1210 05:49:57.547222  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43007de9-1104-41bb-a810-6c8d01b93e01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000a06cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dcb40 TLS:<nil>}
I1210 05:49:57.547295  258889 retry.go:31] will retry after 1.084279ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.550815  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dffec6bf-3e94-4ba8-903e-988a8bbdc91f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc0007952c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0b40 TLS:<nil>}
I1210 05:49:57.550863  258889 retry.go:31] will retry after 1.148462ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.554696  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1235b8d0-d008-43d2-9a28-6c7f31694be9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000803180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511040 TLS:<nil>}
I1210 05:49:57.554765  258889 retry.go:31] will retry after 1.069138ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.558367  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fb93a9ba-0308-4dd7-8dcd-b60b1facd5f1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000795400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dcc80 TLS:<nil>}
I1210 05:49:57.558420  258889 retry.go:31] will retry after 1.310577ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.563035  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ddf9b93-51b1-4706-9cdf-37c638e5f003] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000803280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511180 TLS:<nil>}
I1210 05:49:57.563086  258889 retry.go:31] will retry after 5.097759ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.572996  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10f1db3a-e16c-446e-a6a4-7852d6784f06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000a06ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dcdc0 TLS:<nil>}
I1210 05:49:57.573079  258889 retry.go:31] will retry after 7.04246ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.583639  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db84dd32-be7a-4a31-a506-68620f4988e8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000803380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0dc0 TLS:<nil>}
I1210 05:49:57.583728  258889 retry.go:31] will retry after 6.710418ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.594274  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[470a33e2-14cd-48a3-9663-7fbfe576763b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000795500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd180 TLS:<nil>}
I1210 05:49:57.594337  258889 retry.go:31] will retry after 7.821047ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.607000  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3cd5c600-64f1-43f3-999e-f41d0affeac9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc0007955c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005112c0 TLS:<nil>}
I1210 05:49:57.607078  258889 retry.go:31] will retry after 28.409475ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.642762  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[897f92b8-51cf-438d-858e-c5e41b6731bd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000803480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511400 TLS:<nil>}
I1210 05:49:57.642858  258889 retry.go:31] will retry after 34.19021ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.685588  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95224f7f-9f07-411f-968a-28d6130a8f4a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc0007956c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd2c0 TLS:<nil>}
I1210 05:49:57.685685  258889 retry.go:31] will retry after 32.380645ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.724711  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9ba35d9-9c8a-4d4d-8f23-ff7ecb64f78a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000a07080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511540 TLS:<nil>}
I1210 05:49:57.724795  258889 retry.go:31] will retry after 97.676416ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.832545  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb5c5ea5-bec9-4597-8541-98cbddeb77f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000795840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0f00 TLS:<nil>}
I1210 05:49:57.832621  258889 retry.go:31] will retry after 54.764725ms: Temporary Error: unexpected response code: 503
I1210 05:49:57.896216  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93364388-7ed8-4ee1-b3bf-00cb5d635a80] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:57 GMT]] Body:0xc000795940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511680 TLS:<nil>}
I1210 05:49:57.896291  258889 retry.go:31] will retry after 217.095763ms: Temporary Error: unexpected response code: 503
I1210 05:49:58.117084  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e83d5caf-8c68-4eb8-923e-a1710126785a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:58 GMT]] Body:0xc000795a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005117c0 TLS:<nil>}
I1210 05:49:58.117169  258889 retry.go:31] will retry after 112.082019ms: Temporary Error: unexpected response code: 503
I1210 05:49:58.233253  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[56795c6f-c952-40ef-a500-142682ab64f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:58 GMT]] Body:0xc0008035c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511900 TLS:<nil>}
I1210 05:49:58.233327  258889 retry.go:31] will retry after 414.317461ms: Temporary Error: unexpected response code: 503
I1210 05:49:58.651236  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07e01a0e-f246-4c0a-acf2-fd6036a3870c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:58 GMT]] Body:0xc000a07280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd400 TLS:<nil>}
I1210 05:49:58.651320  258889 retry.go:31] will retry after 517.9131ms: Temporary Error: unexpected response code: 503
I1210 05:49:59.173545  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8f22789-72ed-467e-9a7d-db5d06c4afec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:59 GMT]] Body:0xc0008036c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1040 TLS:<nil>}
I1210 05:49:59.173616  258889 retry.go:31] will retry after 701.647749ms: Temporary Error: unexpected response code: 503
I1210 05:49:59.878844  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dbc41746-7888-4efd-9acd-bb88bb509ff4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:49:59 GMT]] Body:0xc000a07680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd540 TLS:<nil>}
I1210 05:49:59.878941  258889 retry.go:31] will retry after 1.566272065s: Temporary Error: unexpected response code: 503
I1210 05:50:01.449838  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee7e2815-c04e-4652-96e0-eb2b30d21e9e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:01 GMT]] Body:0xc000795b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c12c0 TLS:<nil>}
I1210 05:50:01.449935  258889 retry.go:31] will retry after 871.347886ms: Temporary Error: unexpected response code: 503
I1210 05:50:02.325653  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[435d7678-e716-4ce1-89bc-5f7264e405e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:02 GMT]] Body:0xc000a077c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511a40 TLS:<nil>}
I1210 05:50:02.325723  258889 retry.go:31] will retry after 3.627110868s: Temporary Error: unexpected response code: 503
I1210 05:50:05.959776  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[283a9523-5ad4-4ec3-a8ba-273b054e69dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:05 GMT]] Body:0xc0008037c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1400 TLS:<nil>}
I1210 05:50:05.959855  258889 retry.go:31] will retry after 4.943849863s: Temporary Error: unexpected response code: 503
I1210 05:50:10.910330  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[600d5ae3-c339-4445-8812-db65403a2964] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:10 GMT]] Body:0xc000795c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1540 TLS:<nil>}
I1210 05:50:10.910420  258889 retry.go:31] will retry after 4.419547551s: Temporary Error: unexpected response code: 503
I1210 05:50:15.337955  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1efb9d41-9718-4d14-89d6-4f0f67db6a49] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:15 GMT]] Body:0xc000803840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511b80 TLS:<nil>}
I1210 05:50:15.338042  258889 retry.go:31] will retry after 5.873965965s: Temporary Error: unexpected response code: 503
I1210 05:50:21.217168  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1e954a5b-8d4b-41c9-9189-7df7bba52e4e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:21 GMT]] Body:0xc0008038c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000511cc0 TLS:<nil>}
I1210 05:50:21.217273  258889 retry.go:31] will retry after 15.189280315s: Temporary Error: unexpected response code: 503
I1210 05:50:36.411292  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a84f8e2a-3ce6-4012-b7f3-806ceb3cb0f0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:36 GMT]] Body:0xc000a07f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd680 TLS:<nil>}
I1210 05:50:36.411381  258889 retry.go:31] will retry after 13.232321648s: Temporary Error: unexpected response code: 503
I1210 05:50:49.648611  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2d57ff9-28e3-4d44-ac41-28698bdda16a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:50:49 GMT]] Body:0xc000918940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1680 TLS:<nil>}
I1210 05:50:49.648684  258889 retry.go:31] will retry after 26.70658989s: Temporary Error: unexpected response code: 503
I1210 05:51:16.359823  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06fd296e-1133-45d5-9d74-42bec946f66b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:51:16 GMT]] Body:0xc000803a40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c17c0 TLS:<nil>}
I1210 05:51:16.359935  258889 retry.go:31] will retry after 31.658761902s: Temporary Error: unexpected response code: 503
I1210 05:51:48.024155  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d142ec04-719a-4a7d-a7cf-7d7db7625b51] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:51:48 GMT]] Body:0xc000795d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dd7c0 TLS:<nil>}
I1210 05:51:48.024248  258889 retry.go:31] will retry after 56.870392307s: Temporary Error: unexpected response code: 503
I1210 05:52:44.901706  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2aa52c07-4fef-4cc3-b2bc-627b43036aa2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:52:44 GMT]] Body:0xc000802480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1900 TLS:<nil>}
I1210 05:52:44.901795  258889 retry.go:31] will retry after 1m1.926611855s: Temporary Error: unexpected response code: 503
I1210 05:53:46.833592  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abc50b2a-5035-49ac-80a6-350e95446aa4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:53:46 GMT]] Body:0xc000918940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003dc280 TLS:<nil>}
I1210 05:53:46.833690  258889 retry.go:31] will retry after 47.894178369s: Temporary Error: unexpected response code: 503
I1210 05:54:34.735133  258889 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[189a82eb-7eaf-4b4e-b208-4a908cbff0ff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 10 Dec 2025 05:54:34 GMT]] Body:0xc0008d2100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1a40 TLS:<nil>}
I1210 05:54:34.735260  258889 retry.go:31] will retry after 1m2.998635576s: Temporary Error: unexpected response code: 503
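The 503 responses above come from minikube polling the dashboard service through the local kubectl proxy until the backing pod is ready; each failed probe is treated as a temporary error and retried with a progressively longer delay. A minimal Go sketch of that poll-with-backoff pattern (hypothetical probeDashboard helper and timeout values, not the minikube implementation) could look like:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeDashboard polls url until it returns HTTP 200 or the timeout expires.
// Any non-200 response (for example the 503s seen above while the dashboard
// pod is still starting) is treated as temporary and retried with a doubling
// delay, capped at maxDelay.
func probeDashboard(url string, timeout, maxDelay time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving through the proxy
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("dashboard not healthy within %s", timeout)
		}
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := probeDashboard(url, 5*time.Minute, time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the probe never received a 200 before the test's post-mortem collection began, which is consistent with the dashboard pod never becoming ready.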
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-399479 -n functional-399479
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 logs -n 25: (1.44238998s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-399479 ssh sudo umount -f /mount-9p                                                                                    │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdspecific-port3496322953/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh -- ls -la /mount-9p                                                                                         │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh sudo umount -f /mount-9p                                                                                    │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount1 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount2 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount3 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount1                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount1                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount2                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount3                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ mount          │ -p functional-399479 --kill=true                                                                                                  │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ license        │                                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format short --alsologtostderr                                                                       │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format yaml --alsologtostderr                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh pgrep buildkitd                                                                                             │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ image          │ functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr                            │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls                                                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format json --alsologtostderr                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format table --alsologtostderr                                                                       │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:49:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:49:56.071243  258873 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:49:56.071361  258873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:56.071370  258873 out.go:374] Setting ErrFile to fd 2...
	I1210 05:49:56.071374  258873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:56.071566  258873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:49:56.072056  258873 out.go:368] Setting JSON to false
	I1210 05:49:56.072939  258873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27143,"bootTime":1765318653,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:49:56.073011  258873 start.go:143] virtualization: kvm guest
	I1210 05:49:56.074908  258873 out.go:179] * [functional-399479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:49:56.076720  258873 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:49:56.076741  258873 notify.go:221] Checking for updates...
	I1210 05:49:56.079558  258873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:49:56.080821  258873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:49:56.082034  258873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:49:56.083333  258873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:49:56.084568  258873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:49:56.086239  258873 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:49:56.086702  258873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:49:56.118830  258873 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 05:49:56.120383  258873 start.go:309] selected driver: kvm2
	I1210 05:49:56.120398  258873 start.go:927] validating driver "kvm2" against &{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:56.120509  258873 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:49:56.121479  258873 cni.go:84] Creating CNI manager for ""
	I1210 05:49:56.121539  258873 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:49:56.121590  258873 start.go:353] cluster config:
	{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:56.123027  258873 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.936922189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346096936852562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=318adc49-2a2f-4419-ba7d-7c8ae8c3fe9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.937943593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f3edad2-6f1d-4bcd-a3a1-8a4fd4817a59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.938020801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f3edad2-6f1d-4bcd-a3a1-8a4fd4817a59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.938410399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f3edad2-6f1d-4bcd-a3a1-8a4fd4817a59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.979340751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd3c453-5c12-475e-b57a-39bc12c53dbd name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.979433906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd3c453-5c12-475e-b57a-39bc12c53dbd name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.982111349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efd37f64-e022-4b7f-a624-b78fa0c8e3da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.983445974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346096983419012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efd37f64-e022-4b7f-a624-b78fa0c8e3da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.984621998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5956832c-63c4-4798-ad11-db056896c709 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.984695873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5956832c-63c4-4798-ad11-db056896c709 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:56 functional-399479 crio[7526]: time="2025-12-10 05:54:56.985152205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5956832c-63c4-4798-ad11-db056896c709 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.017408364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a5b523b-a6fe-44c3-9044-85025281d7a9 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.017573856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a5b523b-a6fe-44c3-9044-85025281d7a9 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.019248318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe6f6ee7-accd-4701-b7ab-e7020d462669 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.019954726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346097019852179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe6f6ee7-accd-4701-b7ab-e7020d462669 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.021370337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e8f7787-6426-4096-b04b-b5b83237f64f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.021429922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e8f7787-6426-4096-b04b-b5b83237f64f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.021756784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e8f7787-6426-4096-b04b-b5b83237f64f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.054812753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fdca1590-ed6d-4082-9d3a-6724e1ba5ffa name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.054952568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fdca1590-ed6d-4082-9d3a-6724e1ba5ffa name=/runtime.v1.RuntimeService/Version
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.056429210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4928ce3f-1029-46ee-9a0c-57e3d620d35e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.057166884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346097057139143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4928ce3f-1029-46ee-9a0c-57e3d620d35e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.058335584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0e30a96-d437-40bf-9ca4-19cbebcc3f81 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.058397712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0e30a96-d437-40bf-9ca4-19cbebcc3f81 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:54:57 functional-399479 crio[7526]: time="2025-12-10 05:54:57.058723539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0e30a96-d437-40bf-9ca4-19cbebcc3f81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d112008e11807       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           4 minutes ago       Exited              mount-munger              0                   f293640667c22       busybox-mount                               default
	6a7538813a3fb       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              5 minutes ago       Running             myfrontend                0                   204842a4f6efb       sp-pod                                      default
	ed485bcdae2f9       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   5 minutes ago       Running             mysql                     0                   deecdbdd3e1a5       mysql-6bcdcbc558-vl4tc                      default
	67f1fd2b453be       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              5 minutes ago       Running             coredns                   3                   fad8e6374bbaa       coredns-66bc5c9577-jcttb                    kube-system
	8c5191516b43e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              5 minutes ago       Running             kube-proxy                4                   093979a6e1eb4       kube-proxy-zf8c6                            kube-system
	96cb8c02f153f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              5 minutes ago       Running             storage-provisioner       4                   569b9f50a4e64       storage-provisioner                         kube-system
	5c128b1a04cbd       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              6 minutes ago       Running             kube-scheduler            4                   b644ef5636424       kube-scheduler-functional-399479            kube-system
	8f005b76f1ce3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Running             etcd                      4                   ac7d56ebd4c71       etcd-functional-399479                      kube-system
	099afaefdab63       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              6 minutes ago       Running             kube-apiserver            0                   2c3d9225ccdec       kube-apiserver-functional-399479            kube-system
	a9d50c80fc084       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              6 minutes ago       Running             kube-controller-manager   4                   3e7ec3844fea7       kube-controller-manager-functional-399479   kube-system
	a5eff65a81559       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              6 minutes ago       Exited              kube-proxy                3                   1f793cb509fc3       kube-proxy-zf8c6                            kube-system
	75f7134ab5068       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              6 minutes ago       Exited              coredns                   2                   c75ec0e67c43c       coredns-66bc5c9577-jcttb                    kube-system
	21b79b9c434ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              6 minutes ago       Exited              storage-provisioner       3                   3e977af9c71c8       storage-provisioner                         kube-system
	184dd25c9539e       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              6 minutes ago       Exited              kube-scheduler            3                   66c8ece830920       kube-scheduler-functional-399479            kube-system
	9d9a073798f95       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              6 minutes ago       Exited              kube-controller-manager   3                   1bd011f76ba3a       kube-controller-manager-functional-399479   kube-system
	d528a051cb407       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Exited              etcd                      3                   a683b4bcb97ac       etcd-functional-399479                      kube-system
	
	
	==> coredns [67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58559 - 13908 "HINFO IN 1374157520811740941.4866616976204474115. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031478639s
	
	
	==> coredns [75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41404 - 64138 "HINFO IN 6223323306420373507.4348862341396717294. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024035947s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-399479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-399479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=functional-399479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_46_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-399479
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:54:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:50:59 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:50:59 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:50:59 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:50:59 +0000   Wed, 10 Dec 2025 05:46:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.97
	  Hostname:    functional-399479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 777af56003704c57b88635b0a708362b
	  System UUID:                777af560-0370-4c57-b886-35b0a708362b
	  Boot ID:                    0e262d6b-9ed5-414c-8537-7cf53b538b6b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x55vv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     hello-node-connect-7d85dfc575-zdq58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  default                     mysql-6bcdcbc558-vl4tc                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m38s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 coredns-66bc5c9577-jcttb                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m9s
	  kube-system                 etcd-functional-399479                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m14s
	  kube-system                 kube-apiserver-functional-399479              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-controller-manager-functional-399479     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-zf8c6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-scheduler-functional-399479              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-bw6lk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d59w5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m7s                   kube-proxy       
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  Starting                 6m45s                  kube-proxy       
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m21s (x8 over 8m21s)  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s (x8 over 8m21s)  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s (x7 over 8m21s)  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m14s                  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m14s                  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m14s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m13s                  kubelet          Node functional-399479 status is now: NodeReady
	  Normal  RegisteredNode           8m10s                  node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	  Normal  NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m50s (x7 over 6m50s)  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m43s                  node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	  Normal  Starting                 6m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)    kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)    kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)    kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m58s                  node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	
	
	==> dmesg <==
	[  +1.184215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090417] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108177] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098755] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.143791] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.190505] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 05:47] kauditd_printk_skb: 293 callbacks suppressed
	[  +9.732027] kauditd_printk_skb: 306 callbacks suppressed
	[Dec10 05:48] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.877760] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.235180] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.109354] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.039405] kauditd_printk_skb: 178 callbacks suppressed
	[  +4.365207] kauditd_printk_skb: 161 callbacks suppressed
	[Dec10 05:49] kauditd_printk_skb: 140 callbacks suppressed
	[  +2.039720] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 81 callbacks suppressed
	[  +4.979405] kauditd_printk_skb: 62 callbacks suppressed
	[  +4.826955] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.996433] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.817702] kauditd_printk_skb: 107 callbacks suppressed
	[Dec10 05:50] crun[12089]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.954446] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef] <==
	{"level":"warn","ts":"2025-12-10T05:48:55.504254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.523375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.523504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.531357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.544703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.559360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.568558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.628718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:49:27.384681Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.417735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-10T05:49:27.384924Z","caller":"traceutil/trace.go:172","msg":"trace[277153370] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:748; }","duration":"133.653229ms","start":"2025-12-10T05:49:27.251189Z","end":"2025-12-10T05:49:27.384842Z","steps":["trace[277153370] 'range keys from in-memory index tree'  (duration: 133.286393ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:29.757025Z","caller":"traceutil/trace.go:172","msg":"trace[158511892] linearizableReadLoop","detail":"{readStateIndex:843; appliedIndex:843; }","duration":"297.66008ms","start":"2025-12-10T05:49:29.459345Z","end":"2025-12-10T05:49:29.757005Z","steps":["trace[158511892] 'read index received'  (duration: 297.655654ms)","trace[158511892] 'applied index is now lower than readState.Index'  (duration: 3.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:49:29.757124Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"297.762843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:29.757140Z","caller":"traceutil/trace.go:172","msg":"trace[706153741] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:764; }","duration":"297.793674ms","start":"2025-12-10T05:49:29.459342Z","end":"2025-12-10T05:49:29.757135Z","steps":["trace[706153741] 'agreement among raft nodes before linearized reading'  (duration: 297.732168ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:29.757222Z","caller":"traceutil/trace.go:172","msg":"trace[1277509262] transaction","detail":"{read_only:false; response_revision:765; number_of_response:1; }","duration":"347.983591ms","start":"2025-12-10T05:49:29.409228Z","end":"2025-12-10T05:49:29.757212Z","steps":["trace[1277509262] 'process raft request'  (duration: 347.897412ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:29.757950Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T05:49:29.409210Z","time spent":"348.04999ms","remote":"127.0.0.1:46506","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:749 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-10T05:49:32.032203Z","caller":"traceutil/trace.go:172","msg":"trace[104342288] linearizableReadLoop","detail":"{readStateIndex:844; appliedIndex:844; }","duration":"248.484187ms","start":"2025-12-10T05:49:31.783697Z","end":"2025-12-10T05:49:32.032181Z","steps":["trace[104342288] 'read index received'  (duration: 248.479623ms)","trace[104342288] 'applied index is now lower than readState.Index'  (duration: 4.037µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:49:32.032337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.755441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.032357Z","caller":"traceutil/trace.go:172","msg":"trace[723768851] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:765; }","duration":"248.789726ms","start":"2025-12-10T05:49:31.783562Z","end":"2025-12-10T05:49:32.032352Z","steps":["trace[723768851] 'agreement among raft nodes before linearized reading'  (duration: 248.72597ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:32.032604Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.612176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.032632Z","caller":"traceutil/trace.go:172","msg":"trace[565705583] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"228.632352ms","start":"2025-12-10T05:49:31.803985Z","end":"2025-12-10T05:49:32.032617Z","steps":["trace[565705583] 'agreement among raft nodes before linearized reading'  (duration: 228.60059ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:32.032776Z","caller":"traceutil/trace.go:172","msg":"trace[408533512] transaction","detail":"{read_only:false; response_revision:766; number_of_response:1; }","duration":"259.585453ms","start":"2025-12-10T05:49:31.773184Z","end":"2025-12-10T05:49:32.032770Z","steps":["trace[408533512] 'process raft request'  (duration: 259.329665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:32.032993Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.85981ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.033035Z","caller":"traceutil/trace.go:172","msg":"trace[1618399507] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:766; }","duration":"178.905408ms","start":"2025-12-10T05:49:31.854124Z","end":"2025-12-10T05:49:32.033029Z","steps":["trace[1618399507] 'agreement among raft nodes before linearized reading'  (duration: 178.845231ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:34.220152Z","caller":"traceutil/trace.go:172","msg":"trace[1795910117] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"173.34648ms","start":"2025-12-10T05:49:34.046793Z","end":"2025-12-10T05:49:34.220139Z","steps":["trace[1795910117] 'process raft request'  (duration: 173.215808ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:38.438984Z","caller":"traceutil/trace.go:172","msg":"trace[1742356454] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"171.364285ms","start":"2025-12-10T05:49:38.267605Z","end":"2025-12-10T05:49:38.438969Z","steps":["trace[1742356454] 'process raft request'  (duration: 171.199391ms)"],"step_count":1}
	
	
	==> etcd [d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7] <==
	{"level":"warn","ts":"2025-12-10T05:48:10.453030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.461079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.471951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.489324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.499265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.507363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.590730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33642","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:48:34.508815Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T05:48:34.516700Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-399479","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"]}
	{"level":"error","ts":"2025-12-10T05:48:34.517175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T05:48:34.588690Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T05:48:34.590168Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.590233Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1f2cc3497df204b1","current-leader-member-id":"1f2cc3497df204b1"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590206Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590303Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T05:48:34.590315Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.590337Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T05:48:34.590387Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590394Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T05:48:34.590423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.594468Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"error","ts":"2025-12-10T05:48:34.594553Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.594577Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2025-12-10T05:48:34.594583Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-399479","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"]}
	
	
	==> kernel <==
	 05:54:57 up 9 min,  0 users,  load average: 0.02, 0.45, 0.35
	Linux functional-399479 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c] <==
	I1210 05:48:56.409752       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 05:48:56.417192       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 05:48:56.460956       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 05:48:56.464056       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 05:48:57.176999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 05:48:58.087060       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 05:48:58.171777       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 05:48:58.236083       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 05:48:58.248290       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 05:48:59.858108       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 05:49:00.114702       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 05:49:00.215477       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 05:49:14.616352       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.224.43"}
	I1210 05:49:19.694072       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.132.113"}
	I1210 05:49:21.450904       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.200.250"}
	I1210 05:49:41.039609       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.148.88"}
	E1210 05:49:44.472046       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:45482: use of closed network connection
	E1210 05:49:45.937432       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:45510: use of closed network connection
	E1210 05:49:46.735076       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49836: use of closed network connection
	E1210 05:49:48.721543       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49850: use of closed network connection
	E1210 05:49:53.870275       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49864: use of closed network connection
	E1210 05:49:54.113323       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49888: use of closed network connection
	I1210 05:49:57.070494       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 05:49:57.379515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.116"}
	I1210 05:49:57.423300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.17.229"}
	
	
	==> kube-controller-manager [9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff] <==
	I1210 05:48:14.621449       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 05:48:14.622804       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 05:48:14.622839       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 05:48:14.625332       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 05:48:14.629690       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 05:48:14.632957       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:48:14.634170       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:14.640529       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 05:48:14.641801       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:14.648311       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 05:48:14.652835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 05:48:14.658431       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:14.662674       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 05:48:14.662915       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 05:48:14.663002       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:48:14.663597       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 05:48:14.663752       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 05:48:14.663803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:14.663841       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 05:48:14.663847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 05:48:14.665973       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 05:48:14.666157       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 05:48:14.666651       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 05:48:14.672964       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 05:48:14.686097       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6] <==
	I1210 05:48:59.800053       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 05:48:59.801185       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 05:48:59.832111       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 05:48:59.832367       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 05:48:59.834285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:48:59.834370       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:59.834405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 05:48:59.834493       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:59.834510       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 05:48:59.834525       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 05:48:59.834651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 05:48:59.834706       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 05:48:59.834755       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 05:48:59.835070       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 05:48:59.835146       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 05:48:59.835938       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 05:48:59.851748       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:49:57.183727       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.202094       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.207419       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.209673       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.216698       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.221192       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.226012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.240010       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
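The "forbidden" errors above are raised while the controller manager tries to create the dashboard pods before the kubernetes-dashboard service account exists; they are typically transient while the dashboard manifests are still being applied (the dashboard pods do appear later in this run). A hedged way to confirm that the account eventually shows up, assuming the same kubectl context used elsewhere in this report, is simply:

# check that the addon's service account has been created
kubectl --context functional-399479 get serviceaccounts -n kubernetes-dashboard
# and review the namespace events in order if the errors keep repeating
kubectl --context functional-399479 get events -n kubernetes-dashboard --sort-by=.lastTimestamp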
	
	
	==> kube-proxy [8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7] <==
	I1210 05:48:59.105935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:48:59.213366       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:48:59.213418       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.97"]
	E1210 05:48:59.213480       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:48:59.364268       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:48:59.364348       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:48:59.364372       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:48:59.391299       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:48:59.392342       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:48:59.392375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:59.403200       1 config.go:200] "Starting service config controller"
	I1210 05:48:59.403235       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:48:59.403255       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:48:59.403259       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:48:59.403285       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:48:59.403288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:48:59.404075       1 config.go:309] "Starting node config controller"
	I1210 05:48:59.404106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:48:59.404113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:48:59.503847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:48:59.503903       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:48:59.503939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a5ea3] <==
	I1210 05:48:12.016359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:48:12.117553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:48:12.117622       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.97"]
	E1210 05:48:12.117718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:48:12.188202       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:48:12.188929       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:48:12.189074       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:48:12.206243       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:48:12.206552       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:48:12.206659       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:12.217764       1 config.go:200] "Starting service config controller"
	I1210 05:48:12.217802       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:48:12.217820       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:48:12.217824       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:48:12.217834       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:48:12.217837       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:48:12.218161       1 config.go:309] "Starting node config controller"
	I1210 05:48:12.218194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:48:12.218250       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:48:12.318731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:48:12.319026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 05:48:12.319225       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
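Both kube-proxy instances log the same configuration warning: with nodePortAddresses unset, NodePort traffic is accepted on every local IP, and the message itself suggests `--nodeport-addresses primary`. A minimal sketch of acting on that, assuming the kubeadm-style kube-proxy ConfigMap and DaemonSet that minikube deploys (not something this run performs), would be:

# hedged sketch: set nodePortAddresses in the KubeProxyConfiguration
kubectl --context functional-399479 -n kube-system edit configmap kube-proxy
#   in the embedded config.conf, set:
#     nodePortAddresses: ["primary"]
# then let the DaemonSet recreate kube-proxy with the new config
kubectl --context functional-399479 -n kube-system delete pod -l k8s-app=kube-proxy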
	
	
	==> kube-scheduler [184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711] <==
	E1210 05:48:11.311223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:48:11.311308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:48:11.311374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:48:11.314461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:48:11.318087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:48:11.318480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:48:11.318574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:48:11.318670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:48:11.318725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:48:11.318773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:48:11.318824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:48:11.318938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:48:11.319014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:48:11.319063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:48:11.319146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:48:11.319263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:48:11.319340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:48:11.319364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1210 05:48:12.801410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:34.531453       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 05:48:34.531499       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 05:48:34.531515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1210 05:48:34.531540       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:34.531666       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 05:48:34.531704       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112] <==
	W1210 05:48:56.240480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 05:48:56.293022       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 05:48:56.293069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:56.306145       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 05:48:56.306266       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:56.306300       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:56.306315       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 05:48:56.334670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:48:56.336249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:48:56.336348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:48:56.347202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:48:56.347506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:48:56.347936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:48:56.348326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:48:56.349717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:48:56.349927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:48:56.351495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:48:56.351632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:48:56.351704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:48:56.351171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:48:56.354284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:48:56.362683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:48:56.363955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:48:56.366774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1210 05:48:56.406394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:54:02 functional-399479 kubelet[7912]: E1210 05:54:02.649113    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346042648546868  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:02 functional-399479 kubelet[7912]: E1210 05:54:02.649152    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346042648546868  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:05 functional-399479 kubelet[7912]: E1210 05:54:05.378458    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x55vv" podUID="cfee82c9-4063-410a-bbcc-2e1c3f0a5df9"
	Dec 10 05:54:12 functional-399479 kubelet[7912]: E1210 05:54:12.652129    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346052651531158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:12 functional-399479 kubelet[7912]: E1210 05:54:12.652190    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346052651531158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:22 functional-399479 kubelet[7912]: E1210 05:54:22.654339    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346062653908776  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:22 functional-399479 kubelet[7912]: E1210 05:54:22.654366    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346062653908776  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:29 functional-399479 kubelet[7912]: E1210 05:54:29.028393    7912 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 10 05:54:29 functional-399479 kubelet[7912]: E1210 05:54:29.028484    7912 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 10 05:54:29 functional-399479 kubelet[7912]: E1210 05:54:29.028748    7912 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-bw6lk_kubernetes-dashboard(8e8153f5-47af-4825-99fe-dafe709bacc8): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 10 05:54:29 functional-399479 kubelet[7912]: E1210 05:54:29.028785    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:54:32 functional-399479 kubelet[7912]: E1210 05:54:32.656941    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346072656403517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:32 functional-399479 kubelet[7912]: E1210 05:54:32.656990    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346072656403517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:40 functional-399479 kubelet[7912]: E1210 05:54:40.381737    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:54:42 functional-399479 kubelet[7912]: E1210 05:54:42.659727    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346082659246535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:42 functional-399479 kubelet[7912]: E1210 05:54:42.659772    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346082659246535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:51 functional-399479 kubelet[7912]: E1210 05:54:51.382309    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.472969    7912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7774b15b-c3b0-40b0-8e5b-d38cffdfc273/crio-3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96: Error finding container 3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96: Status 404 returned error can't find the container with id 3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.473392    7912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6e2aee2d-5391-4df0-a992-5575af56d257/crio-1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996: Error finding container 1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996: Status 404 returned error can't find the container with id 1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.473805    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed/crio-c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2: Error finding container c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2: Status 404 returned error can't find the container with id c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.474189    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7a1a6815645cc50cb4652da6da4d32ca/crio-66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd: Error finding container 66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd: Status 404 returned error can't find the container with id 66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.474609    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podce1778fed278bf50a28f60971e954135/crio-a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7: Error finding container a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7: Status 404 returned error can't find the container with id a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.475155    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod05304dd4db39c178d9ebfffb5459860b/crio-1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff: Error finding container 1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff: Status 404 returned error can't find the container with id 1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.662462    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346092661614217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:54:52 functional-399479 kubelet[7912]: E1210 05:54:52.662491    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346092661614217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	
	
	==> storage-provisioner [21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca] <==
	I1210 05:48:11.868458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 05:48:11.906763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 05:48:11.907047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 05:48:11.914079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:15.371837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:19.632302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:23.231498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:26.286771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.310650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.321509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 05:48:29.322181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 05:48:29.322492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a!
	I1210 05:48:29.328567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35171c54-a4d5-44ba-8195-e1ba30809a0f", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a became leader
	W1210 05:48:29.336133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.346734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 05:48:29.423539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a!
	W1210 05:48:31.351249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:31.358112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.362324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.371482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea] <==
	W1210 05:54:32.259664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:34.262770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:34.268335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:36.271609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:36.277423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:38.281308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:38.290945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:40.295724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:40.301311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:42.305840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:42.311534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:44.315838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:44.322055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:46.325735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:46.331778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:48.336357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:48.346388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:50.349848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:50.355028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:52.359319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:52.365700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:54.369796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:54.375191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:56.378734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:54:56.386484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
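The storage-provisioner repeats the same client-go warning on every leader-election renewal because it still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the first provisioner log above) as its lease, and that API is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A hedged way to look at both sides of that deprecation in this cluster is:

# the deprecated object the provisioner still reads and writes for its lease
kubectl --context functional-399479 -n kube-system get endpoints k8s.io-minikube-hostpath
# the v1.33+ replacement surface for service endpoints
kubectl --context functional-399479 -n kube-system get endpointslices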
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399479 -n functional-399479
helpers_test.go:270: (dbg) Run:  kubectl --context functional-399479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5: exit status 1 (93.567333ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:55 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 10 Dec 2025 05:50:14 +0000
	      Finished:     Wed, 10 Dec 2025 05:50:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqs4z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zqs4z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m3s   default-scheduler  Successfully assigned default/busybox-mount to functional-399479
	  Normal  Pulling    5m2s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m44s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.265s (18.085s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m44s  kubelet            Created container: mount-munger
	  Normal  Started    4m44s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-x55vv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6zw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c6zw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m18s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-x55vv to functional-399479
	  Warning  Failed     4m47s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m                   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x3 over 4m47s)  kubelet            Error: ErrImagePull
	  Warning  Failed     90s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    53s (x5 over 4m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     53s (x5 over 4m47s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    38s (x4 over 5m17s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-zdq58
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:21 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djtgd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djtgd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m37s  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zdq58 to functional-399479

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-bw6lk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-d59w5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5: exit status 1
E1210 05:56:00.544848  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.26s)
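The dashboard and echo-server pods in this run never start because anonymous pulls from docker.io hit the Docker Hub rate limit (the repeated "toomanyrequests" errors in the kubelet log and pod events). A minimal sketch of how a local reproduction could sidestep that, assuming a local Docker daemon already holds the image (none of this is performed by the test), is to pre-seed the node or authenticate the pulls:

# pre-load the image into the minikube node so the kubelet never pulls from docker.io
docker pull kicbase/echo-server:latest
minikube -p functional-399479 image load kicbase/echo-server:latest
# or authenticate pulls inside the cluster with an imagePullSecret
# (then reference it from the workload's imagePullSecrets)
kubectl --context functional-399479 create secret docker-registry dockerhub-creds \
  --docker-username=<user> --docker-password=<token>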

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-399479 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-399479 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-zdq58" [5de3fbcd-614c-459f-a654-a6ad3ee41b2a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399479 -n functional-399479
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-10 05:59:21.716623455 +0000 UTC m=+1845.768075582
functional_test.go:1645: (dbg) Run:  kubectl --context functional-399479 describe po hello-node-connect-7d85dfc575-zdq58 -n default
functional_test.go:1645: (dbg) kubectl --context functional-399479 describe po hello-node-connect-7d85dfc575-zdq58 -n default:
Name:             hello-node-connect-7d85dfc575-zdq58
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399479/192.168.50.97
Start Time:       Wed, 10 Dec 2025 05:49:21 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ContainerCreating
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djtgd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   False 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-djtgd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  10m   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zdq58 to functional-399479
functional_test.go:1645: (dbg) Run:  kubectl --context functional-399479 logs hello-node-connect-7d85dfc575-zdq58 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-399479 logs hello-node-connect-7d85dfc575-zdq58 -n default: exit status 1 (76.479467ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zdq58" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-399479 logs hello-node-connect-7d85dfc575-zdq58 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-399479 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-zdq58
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399479/192.168.50.97
Start Time:       Wed, 10 Dec 2025 05:49:21 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ContainerCreating
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djtgd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   False 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-djtgd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  10m   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zdq58 to functional-399479

functional_test.go:1618: (dbg) Run:  kubectl --context functional-399479 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-399479 logs -l app=hello-node-connect: exit status 1 (69.674222ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zdq58" is waiting to start: ContainerCreating

** /stderr **
functional_test.go:1620: "kubectl --context functional-399479 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-399479 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.200.250
IPs:                      10.104.200.250
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32086/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-399479 -n functional-399479
helpers_test.go:253: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 logs -n 25: (1.447941598s)
helpers_test.go:261: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-399479 ssh sudo umount -f /mount-9p                                                                                    │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdspecific-port3496322953/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh -- ls -la /mount-9p                                                                                         │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh sudo umount -f /mount-9p                                                                                    │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount1 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount2 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ mount          │ -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount3 --alsologtostderr -v=1                │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount1                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ ssh            │ functional-399479 ssh findmnt -T /mount1                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount2                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh findmnt -T /mount3                                                                                          │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ mount          │ -p functional-399479 --kill=true                                                                                                  │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ license        │                                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ update-context │ functional-399479 update-context --alsologtostderr -v=2                                                                           │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format short --alsologtostderr                                                                       │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format yaml --alsologtostderr                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ ssh            │ functional-399479 ssh pgrep buildkitd                                                                                             │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │                     │
	│ image          │ functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr                            │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls                                                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format json --alsologtostderr                                                                        │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	│ image          │ functional-399479 image ls --format table --alsologtostderr                                                                       │ functional-399479 │ jenkins │ v1.37.0 │ 10 Dec 25 05:50 UTC │ 10 Dec 25 05:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:49:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:49:56.071243  258873 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:49:56.071361  258873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:56.071370  258873 out.go:374] Setting ErrFile to fd 2...
	I1210 05:49:56.071374  258873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:56.071566  258873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:49:56.072056  258873 out.go:368] Setting JSON to false
	I1210 05:49:56.072939  258873 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27143,"bootTime":1765318653,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:49:56.073011  258873 start.go:143] virtualization: kvm guest
	I1210 05:49:56.074908  258873 out.go:179] * [functional-399479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:49:56.076720  258873 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:49:56.076741  258873 notify.go:221] Checking for updates...
	I1210 05:49:56.079558  258873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:49:56.080821  258873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:49:56.082034  258873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:49:56.083333  258873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:49:56.084568  258873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:49:56.086239  258873 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:49:56.086702  258873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:49:56.118830  258873 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 05:49:56.120383  258873 start.go:309] selected driver: kvm2
	I1210 05:49:56.120398  258873 start.go:927] validating driver "kvm2" against &{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:56.120509  258873 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:49:56.121479  258873 cni.go:84] Creating CNI manager for ""
	I1210 05:49:56.121539  258873 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:49:56.121590  258873 start.go:353] cluster config:
	{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:56.123027  258873 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.793543664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42194bd8-81bd-43fb-a45b-df48a6f51dc5 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.795110272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29602507-4cf9-4ea0-b73f-b0cf0f28d6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.795807766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346362795781986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29602507-4cf9-4ea0-b73f-b0cf0f28d6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.797500496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24b845f4-b3d3-46cc-ad6d-250c9c71bfce name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.797600826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24b845f4-b3d3-46cc-ad6d-250c9c71bfce name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.797984714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24b845f4-b3d3-46cc-ad6d-250c9c71bfce name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.804483300Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6c73030a-5cd1-4540-b7d0-edc77c22b6ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.806390108Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7ba0047fb61e4e2b5701aa682814122084659caa86ac540a10dd1a59130e6155,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-d59w5,Uid:fa93e8d0-e74d-46e8-b269-0ada90ae5c76,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345797662758041,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-d59w5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fa93e8d0-e74d-46e8-b269-0ada90ae5c76,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:57.326019987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37d3c114e8d0a3ba8909d6457d6fe5a638791c7485b5399b2fdebdf263faaae9,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-77bf4d6c4c-bw6lk,Uid:8e8153f5-47af-4825-99fe-dafe709bacc8,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345797605562150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-bw6lk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8e8153f5-47af-4825-99fe-dafe709bacc8,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:57.284163347Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:927b8082-ea8c-4e49-a503-c6c8b3ff138b,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765345795746686512,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,i
o.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:55.415741748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:a50b332a-ac62-404c-99a2-9880df367d9c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345787284998814,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"publi
c.ecr.aws/nginx/nginx:alpine\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-12-10T05:49:46.963748363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b7632707bf4c73d47e16c61f215e5c4f7888a14beb9153d2c9a8b49d3d497cb,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-x55vv,Uid:cfee82c9-4063-410a-bbcc-2e1c3f0a5df9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345781317412144,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-x55vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfee82c9-4063-410a-bbcc-2e1c3f0a5df9,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:40.981034205Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:deecdbdd3e1a5fed16a27
3d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&PodSandboxMetadata{Name:mysql-6bcdcbc558-vl4tc,Uid:234c01a1-0983-4d96-98a7-4e5dc0f914d3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345760140282615,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,pod-template-hash: 6bcdcbc558,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:49:19.808917098Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-jcttb,Uid:07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765345738267064738,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:48:56.299375627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&PodSandboxMetadata{Name:kube-proxy-zf8c6,Uid:6e2aee2d-5391-4df0-a992-5575af56d257,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765345738139160307,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:48:56.299367545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&P
odSandboxMetadata{Name:storage-provisioner,Uid:7774b15b-c3b0-40b0-8e5b-d38cffdfc273,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765345737530561783,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":
\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-10T05:48:56.299374280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-399479,Uid:7a1a6815645cc50cb4652da6da4d32ca,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765345733052037976,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a1a6815645cc50cb4652da6da4d32ca,kubernetes.io/config.seen: 2025-12-10T05:48:52.287593985Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandb
ox{Id:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-399479,Uid:05304dd4db39c178d9ebfffb5459860b,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765345733038787779,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05304dd4db39c178d9ebfffb5459860b,kubernetes.io/config.seen: 2025-12-10T05:48:52.287599929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-399479,Uid:13918951b31227ef1afd98d22dc8f498,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765345733019052942,Labels:
map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.97:8441,kubernetes.io/config.hash: 13918951b31227ef1afd98d22dc8f498,kubernetes.io/config.seen: 2025-12-10T05:48:52.287599017Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&PodSandboxMetadata{Name:etcd-functional-399479,Uid:ce1778fed278bf50a28f60971e954135,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765345733014277509,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.97:2379,kubernetes.io/config.hash: ce1778fed278bf50a28f60971e954135,kubernetes.io/config.seen: 2025-12-10T05:48:52.287597927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-jcttb,Uid:07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765345671643303564,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:46:48.818417662Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587
b63c5956f9d6a4f7,Metadata:&PodSandboxMetadata{Name:etcd-functional-399479,Uid:ce1778fed278bf50a28f60971e954135,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765345671448367453,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.97:2379,kubernetes.io/config.hash: ce1778fed278bf50a28f60971e954135,kubernetes.io/config.seen: 2025-12-10T05:46:43.172789924Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7774b15b-c3b0-40b0-8e5b-d38cffdfc273,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765345671266482639,Labels:map[string]string{addonmanager.kubern
etes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-10T0
5:46:50.669576268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&PodSandboxMetadata{Name:kube-proxy-zf8c6,Uid:6e2aee2d-5391-4df0-a992-5575af56d257,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765345671247390722,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-10T05:46:48.656052018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-399479,Uid:7a1a6815645cc50cb4652da6da4d32ca,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:17653456712165
61029,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a1a6815645cc50cb4652da6da4d32ca,kubernetes.io/config.seen: 2025-12-10T05:46:43.172796929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-399479,Uid:05304dd4db39c178d9ebfffb5459860b,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765345671179308069,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,tier: control-plan
e,},Annotations:map[string]string{kubernetes.io/config.hash: 05304dd4db39c178d9ebfffb5459860b,kubernetes.io/config.seen: 2025-12-10T05:46:43.172795789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6c73030a-5cd1-4540-b7d0-edc77c22b6ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.810960309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19ae456f-5832-4c93-b652-92462c288203 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.811534873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19ae456f-5832-4c93-b652-92462c288203 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.812606879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19ae456f-5832-4c93-b652-92462c288203 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.842551724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9e2052c-bc45-4ec8-a7cb-7c02e2f254ac name=/runtime.v1.RuntimeService/Version
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.842633058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9e2052c-bc45-4ec8-a7cb-7c02e2f254ac name=/runtime.v1.RuntimeService/Version
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.844712207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3923650-df01-46bd-b08a-67e67443f707 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.845452468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346362845422282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3923650-df01-46bd-b08a-67e67443f707 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.846679963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70da4e1e-71bc-4570-9882-27537020da1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.846956563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70da4e1e-71bc-4570-9882-27537020da1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.847532486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70da4e1e-71bc-4570-9882-27537020da1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.880076547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16c09869-8760-449f-b7c5-887143a1e069 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.880168930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16c09869-8760-449f-b7c5-887143a1e069 name=/runtime.v1.RuntimeService/Version
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.882010818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cac251ae-fafc-4ed6-98ad-325e8c5c17a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.882704808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765346362882678470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:210362,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cac251ae-fafc-4ed6-98ad-325e8c5c17a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.883529532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7ec5813-7fb6-4eb5-9d2a-59acd042fcd6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.883585831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7ec5813-7fb6-4eb5-9d2a-59acd042fcd6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 05:59:22 functional-399479 crio[7526]: time="2025-12-10 05:59:22.884084113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f,PodSandboxId:f293640667c22b5c7dbb9ae4c492fbe5e8be955ff34b269efd8939bb1fd14683,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765345814149012042,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 927b8082-ea8c-4e49-a503-c6c8b3ff138b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7538813a3fbfdb4d0580b3108ad7916ad08039aad2fbbdd5d4d540422fd424,PodSandboxId:204842a4f6efbd2cae7c4dddbd5d34f6a3d729df20798547c945154b74f49c4b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765345787552626404,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a50b332a-ac62-404c-99a2-9880df367d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed485bcdae2f9dd4422271ba8dfbe64b34770d7bc632fcaca2ca015fa673b22a,PodSandboxId:deecdbdd3e1a5fed16a273d9bb38bb62ce6387ba32ea5eec08cd4448677fb320,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765345776223219145,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-vl4tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 234c01a1-0983-4d96-98a7-4e5dc0f914d3,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495,PodSandboxId:fad8e6374bbaa29f287132015a9d37d485201a7a586b8cf824dfdf06136affc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765345739022574322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7,PodSandboxId:093979a6e1eb4711ec30269600e9324c75fb7da3db9f6059b98407c720d66988,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765345738555287129,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea,PodSandboxId:569b9f50a4e64685a84fbac953062ccd6c7201fc75bbd16cb8d284947ef61444,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765345
737630367102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112,PodSandboxId:b644ef5636424c4c38407220fd047493dff42936e7c6c8d196f0a81bfdee8fa9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765345733337419453,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef,PodSandboxId:ac7d56ebd4c712bc823cfcb2be25e8f4743323052b7631d670851cfb54a5379e,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e28500
85ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765345733283754079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c,PodSandboxId:2c3d9225ccdeca59e0bd22c26d5aedf53dd49810ed93ffc105bace7af0ad0cf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765345733277748251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13918951b31227ef1afd98d22dc8f498,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6,PodSandboxId:3e7ec3844fea7e596dea92a1a88e9cef6f8c99ea19e3145b20ba0fb6a52e550d,Metadata:&ContainerMetadata{Name:kube-controller-manag
er,Attempt:4,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765345733231581685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a
5ea3,PodSandboxId:1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765345691697716547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zf8c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2aee2d-5391-4df0-a992-5575af56d257,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef,PodSandboxId:c75ec0e67c43c2c0
96ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765345691696166720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jcttb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"
protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca,PodSandboxId:3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765345691692748511,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7774b15b-c3b0-40b0-8e5b-d38cffdfc273,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711,PodSandboxId:66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765345688093212273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a1a6815645cc50cb4652da6da4d32ca,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff,PodSandboxId:1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765345688083167069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399479,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 05304dd4db39c178d9ebfffb5459860b,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7,PodSandboxId:a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765345688061118034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-399479,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1778fed278bf50a28f60971e954135,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7ec5813-7fb6-4eb5-9d2a-59acd042fcd6 name=/runtime.v1.RuntimeService/ListContainers
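The repeated ListContainers, Version and ImageFsInfo entries above are the CRI gRPC calls that clients such as the kubelet or crictl issue against the CRI-O socket; with an empty filter, ListContainers returns the full container list, which is why each response repeats the whole set of control-plane and workload containers. As a rough way to issue the same queries by hand (a sketch only; it assumes crictl is installed on the node and that CRI-O listens on its default socket path):

    # Not part of the test run; assumes crictl and the default CRI-O endpoint.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo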
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d112008e11807       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           9 minutes ago       Exited              mount-munger              0                   f293640667c22       busybox-mount                               default
	6a7538813a3fb       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              9 minutes ago       Running             myfrontend                0                   204842a4f6efb       sp-pod                                      default
	ed485bcdae2f9       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   9 minutes ago       Running             mysql                     0                   deecdbdd3e1a5       mysql-6bcdcbc558-vl4tc                      default
	67f1fd2b453be       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              10 minutes ago      Running             coredns                   3                   fad8e6374bbaa       coredns-66bc5c9577-jcttb                    kube-system
	8c5191516b43e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              10 minutes ago      Running             kube-proxy                4                   093979a6e1eb4       kube-proxy-zf8c6                            kube-system
	96cb8c02f153f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       4                   569b9f50a4e64       storage-provisioner                         kube-system
	5c128b1a04cbd       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              10 minutes ago      Running             kube-scheduler            4                   b644ef5636424       kube-scheduler-functional-399479            kube-system
	8f005b76f1ce3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Running             etcd                      4                   ac7d56ebd4c71       etcd-functional-399479                      kube-system
	099afaefdab63       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              10 minutes ago      Running             kube-apiserver            0                   2c3d9225ccdec       kube-apiserver-functional-399479            kube-system
	a9d50c80fc084       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              10 minutes ago      Running             kube-controller-manager   4                   3e7ec3844fea7       kube-controller-manager-functional-399479   kube-system
	a5eff65a81559       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              11 minutes ago      Exited              kube-proxy                3                   1f793cb509fc3       kube-proxy-zf8c6                            kube-system
	75f7134ab5068       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              11 minutes ago      Exited              coredns                   2                   c75ec0e67c43c       coredns-66bc5c9577-jcttb                    kube-system
	21b79b9c434ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Exited              storage-provisioner       3                   3e977af9c71c8       storage-provisioner                         kube-system
	184dd25c9539e       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              11 minutes ago      Exited              kube-scheduler            3                   66c8ece830920       kube-scheduler-functional-399479            kube-system
	9d9a073798f95       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              11 minutes ago      Exited              kube-controller-manager   3                   1bd011f76ba3a       kube-controller-manager-functional-399479   kube-system
	d528a051cb407       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              11 minutes ago      Exited              etcd                      3                   a683b4bcb97ac       etcd-functional-399479                      kube-system
	
	
	==> coredns [67f1fd2b453be6ec0b757ecf35e456e0719cf02e36e04437ac365672e5d98495] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58559 - 13908 "HINFO IN 1374157520811740941.4866616976204474115. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031478639s
	
	
	==> coredns [75f7134ab506877facbe1fdbaf19d67b9941db290a03df6595e30df928f8b2ef] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41404 - 64138 "HINFO IN 6223323306420373507.4348862341396717294. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024035947s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-399479
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-399479
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=functional-399479
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_46_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-399479
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:55:45 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:55:45 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:55:45 +0000   Wed, 10 Dec 2025 05:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:55:45 +0000   Wed, 10 Dec 2025 05:46:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.97
	  Hostname:    functional-399479
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 777af56003704c57b88635b0a708362b
	  System UUID:                777af560-0370-4c57-b886-35b0a708362b
	  Boot ID:                    0e262d6b-9ed5-414c-8537-7cf53b538b6b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x55vv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  default                     hello-node-connect-7d85dfc575-zdq58           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6bcdcbc558-vl4tc                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-66bc5c9577-jcttb                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-399479                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-399479              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-399479     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zf8c6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-399479              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-bw6lk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d59w5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-399479 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-399479 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-399479 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-399479 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-399479 event: Registered Node functional-399479 in Controller
	
	
	==> dmesg <==
	[  +1.184215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090417] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108177] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098755] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.143791] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.190505] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 05:47] kauditd_printk_skb: 293 callbacks suppressed
	[  +9.732027] kauditd_printk_skb: 306 callbacks suppressed
	[Dec10 05:48] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.877760] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.235180] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.109354] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.039405] kauditd_printk_skb: 178 callbacks suppressed
	[  +4.365207] kauditd_printk_skb: 161 callbacks suppressed
	[Dec10 05:49] kauditd_printk_skb: 140 callbacks suppressed
	[  +2.039720] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 81 callbacks suppressed
	[  +4.979405] kauditd_printk_skb: 62 callbacks suppressed
	[  +4.826955] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.996433] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.817702] kauditd_printk_skb: 107 callbacks suppressed
	[Dec10 05:50] crun[12089]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.954446] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [8f005b76f1ce30a1a58ca40282691ebbd6747c4db863f674caf025df264750ef] <==
	{"level":"warn","ts":"2025-12-10T05:48:55.531357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.544703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.559360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.568558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:55.628718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:49:27.384681Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.417735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-10T05:49:27.384924Z","caller":"traceutil/trace.go:172","msg":"trace[277153370] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:748; }","duration":"133.653229ms","start":"2025-12-10T05:49:27.251189Z","end":"2025-12-10T05:49:27.384842Z","steps":["trace[277153370] 'range keys from in-memory index tree'  (duration: 133.286393ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:29.757025Z","caller":"traceutil/trace.go:172","msg":"trace[158511892] linearizableReadLoop","detail":"{readStateIndex:843; appliedIndex:843; }","duration":"297.66008ms","start":"2025-12-10T05:49:29.459345Z","end":"2025-12-10T05:49:29.757005Z","steps":["trace[158511892] 'read index received'  (duration: 297.655654ms)","trace[158511892] 'applied index is now lower than readState.Index'  (duration: 3.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:49:29.757124Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"297.762843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:29.757140Z","caller":"traceutil/trace.go:172","msg":"trace[706153741] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:764; }","duration":"297.793674ms","start":"2025-12-10T05:49:29.459342Z","end":"2025-12-10T05:49:29.757135Z","steps":["trace[706153741] 'agreement among raft nodes before linearized reading'  (duration: 297.732168ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:29.757222Z","caller":"traceutil/trace.go:172","msg":"trace[1277509262] transaction","detail":"{read_only:false; response_revision:765; number_of_response:1; }","duration":"347.983591ms","start":"2025-12-10T05:49:29.409228Z","end":"2025-12-10T05:49:29.757212Z","steps":["trace[1277509262] 'process raft request'  (duration: 347.897412ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:29.757950Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T05:49:29.409210Z","time spent":"348.04999ms","remote":"127.0.0.1:46506","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:749 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-10T05:49:32.032203Z","caller":"traceutil/trace.go:172","msg":"trace[104342288] linearizableReadLoop","detail":"{readStateIndex:844; appliedIndex:844; }","duration":"248.484187ms","start":"2025-12-10T05:49:31.783697Z","end":"2025-12-10T05:49:32.032181Z","steps":["trace[104342288] 'read index received'  (duration: 248.479623ms)","trace[104342288] 'applied index is now lower than readState.Index'  (duration: 4.037µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T05:49:32.032337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.755441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.032357Z","caller":"traceutil/trace.go:172","msg":"trace[723768851] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:765; }","duration":"248.789726ms","start":"2025-12-10T05:49:31.783562Z","end":"2025-12-10T05:49:32.032352Z","steps":["trace[723768851] 'agreement among raft nodes before linearized reading'  (duration: 248.72597ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:32.032604Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.612176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.032632Z","caller":"traceutil/trace.go:172","msg":"trace[565705583] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"228.632352ms","start":"2025-12-10T05:49:31.803985Z","end":"2025-12-10T05:49:32.032617Z","steps":["trace[565705583] 'agreement among raft nodes before linearized reading'  (duration: 228.60059ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:32.032776Z","caller":"traceutil/trace.go:172","msg":"trace[408533512] transaction","detail":"{read_only:false; response_revision:766; number_of_response:1; }","duration":"259.585453ms","start":"2025-12-10T05:49:31.773184Z","end":"2025-12-10T05:49:32.032770Z","steps":["trace[408533512] 'process raft request'  (duration: 259.329665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T05:49:32.032993Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.85981ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T05:49:32.033035Z","caller":"traceutil/trace.go:172","msg":"trace[1618399507] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:766; }","duration":"178.905408ms","start":"2025-12-10T05:49:31.854124Z","end":"2025-12-10T05:49:32.033029Z","steps":["trace[1618399507] 'agreement among raft nodes before linearized reading'  (duration: 178.845231ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:34.220152Z","caller":"traceutil/trace.go:172","msg":"trace[1795910117] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"173.34648ms","start":"2025-12-10T05:49:34.046793Z","end":"2025-12-10T05:49:34.220139Z","steps":["trace[1795910117] 'process raft request'  (duration: 173.215808ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:49:38.438984Z","caller":"traceutil/trace.go:172","msg":"trace[1742356454] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"171.364285ms","start":"2025-12-10T05:49:38.267605Z","end":"2025-12-10T05:49:38.438969Z","steps":["trace[1742356454] 'process raft request'  (duration: 171.199391ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T05:58:54.465313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1150}
	{"level":"info","ts":"2025-12-10T05:58:54.499671Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1150,"took":"33.27662ms","hash":3636035271,"current-db-size-bytes":3600384,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1703936,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-10T05:58:54.499736Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3636035271,"revision":1150,"compact-revision":-1}
	
	
	==> etcd [d528a051cb407db10faa2b6eab2550403b093bbdbfcfa526543a2d790f10d2b7] <==
	{"level":"warn","ts":"2025-12-10T05:48:10.453030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.461079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.471951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.489324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.499265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.507363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:48:10.590730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33642","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:48:34.508815Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T05:48:34.516700Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-399479","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"]}
	{"level":"error","ts":"2025-12-10T05:48:34.517175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T05:48:34.588690Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T05:48:34.590168Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.590233Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1f2cc3497df204b1","current-leader-member-id":"1f2cc3497df204b1"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590206Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590303Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T05:48:34.590315Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.590337Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T05:48:34.590387Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590394Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T05:48:34.590416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T05:48:34.590423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.594468Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"error","ts":"2025-12-10T05:48:34.594553Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T05:48:34.594577Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2025-12-10T05:48:34.594583Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-399479","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"]}
	
	
	==> kernel <==
	 05:59:23 up 13 min,  0 users,  load average: 0.26, 0.34, 0.33
	Linux functional-399479 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [099afaefdab639c10f0fea0229d6f39df0bda640b494890a88b9fcf30f00098c] <==
	I1210 05:48:56.417192       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 05:48:56.460956       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 05:48:56.464056       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 05:48:57.176999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 05:48:58.087060       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 05:48:58.171777       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 05:48:58.236083       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 05:48:58.248290       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 05:48:59.858108       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 05:49:00.114702       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 05:49:00.215477       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 05:49:14.616352       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.224.43"}
	I1210 05:49:19.694072       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.132.113"}
	I1210 05:49:21.450904       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.200.250"}
	I1210 05:49:41.039609       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.148.88"}
	E1210 05:49:44.472046       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:45482: use of closed network connection
	E1210 05:49:45.937432       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:45510: use of closed network connection
	E1210 05:49:46.735076       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49836: use of closed network connection
	E1210 05:49:48.721543       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49850: use of closed network connection
	E1210 05:49:53.870275       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49864: use of closed network connection
	E1210 05:49:54.113323       1 conn.go:339] Error on socket receive: read tcp 192.168.50.97:8441->192.168.50.1:49888: use of closed network connection
	I1210 05:49:57.070494       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 05:49:57.379515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.116"}
	I1210 05:49:57.423300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.17.229"}
	I1210 05:58:56.319743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9d9a073798f95464023ab2f00c3c4f6673e6f656c3d1d416620e93a0f1c627ff] <==
	I1210 05:48:14.621449       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 05:48:14.622804       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 05:48:14.622839       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 05:48:14.625332       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 05:48:14.629690       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 05:48:14.632957       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:48:14.634170       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:14.640529       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 05:48:14.641801       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:14.648311       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 05:48:14.652835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 05:48:14.658431       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:14.662674       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 05:48:14.662915       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 05:48:14.663002       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:48:14.663597       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 05:48:14.663752       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 05:48:14.663803       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:14.663841       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 05:48:14.663847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 05:48:14.665973       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 05:48:14.666157       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 05:48:14.666651       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 05:48:14.672964       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 05:48:14.686097       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [a9d50c80fc084dae7743a652ba587683cd8ccb59fea82e2295b1ba3fc1eb95a6] <==
	I1210 05:48:59.800053       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 05:48:59.801185       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 05:48:59.832111       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 05:48:59.832367       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 05:48:59.834285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 05:48:59.834370       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:48:59.834405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 05:48:59.834493       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:48:59.834510       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 05:48:59.834525       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 05:48:59.834651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 05:48:59.834706       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 05:48:59.834755       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 05:48:59.835070       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 05:48:59.835146       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 05:48:59.835938       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 05:48:59.851748       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:49:57.183727       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.202094       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.207419       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.209673       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.216698       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.221192       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.226012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 05:49:57.240010       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [8c5191516b43ed9a9aac9e9e40e0db020d98c0c6ac92fe13c623d7cfc4168ea7] <==
	I1210 05:48:59.105935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:48:59.213366       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:48:59.213418       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.97"]
	E1210 05:48:59.213480       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:48:59.364268       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:48:59.364348       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:48:59.364372       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:48:59.391299       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:48:59.392342       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:48:59.392375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:59.403200       1 config.go:200] "Starting service config controller"
	I1210 05:48:59.403235       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:48:59.403255       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:48:59.403259       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:48:59.403285       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:48:59.403288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:48:59.404075       1 config.go:309] "Starting node config controller"
	I1210 05:48:59.404106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:48:59.404113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:48:59.503847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 05:48:59.503903       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:48:59.503939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a5eff65a81559b983be9329bb1512bed25d3c581ad1f00ee22a500646b1a5ea3] <==
	I1210 05:48:12.016359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:48:12.117553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:48:12.117622       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.97"]
	E1210 05:48:12.117718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:48:12.188202       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 05:48:12.188929       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 05:48:12.189074       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:48:12.206243       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:48:12.206552       1 server.go:527] "Version info" version="v1.34.3"
	I1210 05:48:12.206659       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:12.217764       1 config.go:200] "Starting service config controller"
	I1210 05:48:12.217802       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:48:12.217820       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:48:12.217824       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:48:12.217834       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:48:12.217837       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:48:12.218161       1 config.go:309] "Starting node config controller"
	I1210 05:48:12.218194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:48:12.218250       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:48:12.318731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:48:12.319026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 05:48:12.319225       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [184dd25c9539e8cb73ef1749892f9843c3a65f879d24a33add3538060f79e711] <==
	E1210 05:48:11.311223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:48:11.311308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:48:11.311374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:48:11.314461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:48:11.318087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:48:11.318480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:48:11.318574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:48:11.318670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:48:11.318725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:48:11.318773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:48:11.318824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 05:48:11.318938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:48:11.319014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:48:11.319063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:48:11.319146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:48:11.319263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:48:11.319340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:48:11.319364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1210 05:48:12.801410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:34.531453       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 05:48:34.531499       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 05:48:34.531515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1210 05:48:34.531540       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:34.531666       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 05:48:34.531704       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5c128b1a04cbdfd6e43bfd4e0974d8e739cdae1f5e405ba6642aa6c96c4ef112] <==
	W1210 05:48:56.240480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 05:48:56.293022       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1210 05:48:56.293069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:48:56.306145       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 05:48:56.306266       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:56.306300       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 05:48:56.306315       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 05:48:56.334670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:48:56.336249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:48:56.336348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 05:48:56.347202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:48:56.347506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:48:56.347936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:48:56.348326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:48:56.349717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:48:56.349927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:48:56.351495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:48:56.351632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:48:56.351704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:48:56.351171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:48:56.354284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:48:56.362683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 05:48:56.363955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:48:56.366774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1210 05:48:56.406394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:58:32 functional-399479 kubelet[7912]: E1210 05:58:32.723061    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346312722430908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:58:33 functional-399479 kubelet[7912]: E1210 05:58:33.378442    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x55vv" podUID="cfee82c9-4063-410a-bbcc-2e1c3f0a5df9"
	Dec 10 05:58:35 functional-399479 kubelet[7912]: E1210 05:58:35.379953    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:58:42 functional-399479 kubelet[7912]: E1210 05:58:42.725676    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346322725120659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:58:42 functional-399479 kubelet[7912]: E1210 05:58:42.725703    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346322725120659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:58:47 functional-399479 kubelet[7912]: E1210 05:58:47.379251    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:58:48 functional-399479 kubelet[7912]: E1210 05:58:48.378512    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x55vv" podUID="cfee82c9-4063-410a-bbcc-2e1c3f0a5df9"
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.473573    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod07ed6ae3-922e-405d-b2a9-b6d28bb1f8ed/crio-c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2: Error finding container c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2: Status 404 returned error can't find the container with id c75ec0e67c43c2c096ecb1f4a4d379206d51f0d94780e569ca273b17fff01cc2
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.474451    7912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod6e2aee2d-5391-4df0-a992-5575af56d257/crio-1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996: Error finding container 1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996: Status 404 returned error can't find the container with id 1f793cb509fc3d7422826b7482789de924621d16f12ce8cae81770f745d5a996
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.474897    7912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7774b15b-c3b0-40b0-8e5b-d38cffdfc273/crio-3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96: Error finding container 3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96: Status 404 returned error can't find the container with id 3e977af9c71c890b74ce839c6c3f00d2bcf0a63ccae26bdde9a12af7be6afa96
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.475418    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7a1a6815645cc50cb4652da6da4d32ca/crio-66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd: Error finding container 66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd: Status 404 returned error can't find the container with id 66c8ece83092060f4e55db84c6e24afce1a9d721898b0e9ebaa999c627976cbd
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.475703    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podce1778fed278bf50a28f60971e954135/crio-a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7: Error finding container a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7: Status 404 returned error can't find the container with id a683b4bcb97ac754c4274849b2ec011e33ee899c04694587b63c5956f9d6a4f7
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.476137    7912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod05304dd4db39c178d9ebfffb5459860b/crio-1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff: Error finding container 1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff: Status 404 returned error can't find the container with id 1bd011f76ba3ad73263685fae4d089eb6f4db934258e492a4c88f955ca0d84ff
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.732662    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346332730733385  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:58:52 functional-399479 kubelet[7912]: E1210 05:58:52.732695    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346332730733385  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:00 functional-399479 kubelet[7912]: E1210 05:59:00.378709    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x55vv" podUID="cfee82c9-4063-410a-bbcc-2e1c3f0a5df9"
	Dec 10 05:59:01 functional-399479 kubelet[7912]: E1210 05:59:01.380049    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:59:02 functional-399479 kubelet[7912]: E1210 05:59:02.735800    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346342734737949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:02 functional-399479 kubelet[7912]: E1210 05:59:02.735843    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346342734737949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:12 functional-399479 kubelet[7912]: E1210 05:59:12.738592    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346352737971109  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:12 functional-399479 kubelet[7912]: E1210 05:59:12.738625    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346352737971109  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:14 functional-399479 kubelet[7912]: E1210 05:59:14.380645    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x55vv" podUID="cfee82c9-4063-410a-bbcc-2e1c3f0a5df9"
	Dec 10 05:59:15 functional-399479 kubelet[7912]: E1210 05:59:15.379551    7912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bw6lk" podUID="8e8153f5-47af-4825-99fe-dafe709bacc8"
	Dec 10 05:59:22 functional-399479 kubelet[7912]: E1210 05:59:22.740565    7912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346362740240560  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	Dec 10 05:59:22 functional-399479 kubelet[7912]: E1210 05:59:22.740587    7912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346362740240560  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:210362}  inodes_used:{value:99}}"
	
	
	==> storage-provisioner [21b79b9c434edade68566f44e7dadb72fa7d05782ecba5d32113b8602a8c61ca] <==
	I1210 05:48:11.868458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 05:48:11.906763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 05:48:11.907047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 05:48:11.914079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:15.371837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:19.632302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:23.231498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:26.286771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.310650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.321509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 05:48:29.322181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 05:48:29.322492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a!
	I1210 05:48:29.328567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35171c54-a4d5-44ba-8195-e1ba30809a0f", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a became leader
	W1210 05:48:29.336133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.346734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 05:48:29.423539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-399479_4eb6676b-7c31-4ff1-9c86-f4cf7f15733a!
	W1210 05:48:31.351249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:31.358112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.362324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.371482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [96cb8c02f153f2fc995c27d8026ef4e83db634bf200bbbb7a9d1b281259506ea] <==
	W1210 05:58:57.745393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:58:59.750389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:58:59.756382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:01.760524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:01.771351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:03.776141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:03.782033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:05.785643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:05.791681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:07.796710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:07.803177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:09.807243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:09.813632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:11.818306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:11.824040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:13.828077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:13.833368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:15.837265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:15.843547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:17.847189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:17.853625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:19.858598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:19.864540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:21.869020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:59:21.875431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
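Note: the kubelet log above attributes the stuck pods to Docker Hub's unauthenticated pull rate limit ("toomanyrequests") rather than to a cluster fault. As a quick check, one way to confirm whether the image ever reached the node is to list the images CRI-O has pulled (a diagnostic sketch, not part of the test run):

  # run inside the test VM; an empty list for echo-server means the pull never succeeded
  minikube -p functional-399479 ssh -- sudo crictl images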
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399479 -n functional-399479
helpers_test.go:270: (dbg) Run:  kubectl --context functional-399479 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5: exit status 1 (90.59432ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:55 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d112008e1180795d62e2a177aeb1d75655a06b758ffe36b7a40bfff61065aa1f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 10 Dec 2025 05:50:14 +0000
	      Finished:     Wed, 10 Dec 2025 05:50:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqs4z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zqs4z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m29s  default-scheduler  Successfully assigned default/busybox-mount to functional-399479
	  Normal  Pulling    9m28s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.265s (18.085s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m10s  kubelet            Created container: mount-munger
	  Normal  Started    9m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-x55vv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6zw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c6zw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m44s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-x55vv to functional-399479
	  Warning  Failed     9m13s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m26s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m27s (x5 over 9m43s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     115s (x5 over 9m13s)   kubelet            Error: ErrImagePull
	  Warning  Failed     115s (x3 over 5m56s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x16 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x19 over 9m13s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-zdq58
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399479/192.168.50.97
	Start Time:       Wed, 10 Dec 2025 05:49:21 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djtgd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djtgd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zdq58 to functional-399479

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-bw6lk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-d59w5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-399479 describe pod busybox-mount hello-node-75c85bcc94-x55vv hello-node-connect-7d85dfc575-zdq58 dashboard-metrics-scraper-77bf4d6c4c-bw6lk kubernetes-dashboard-855c9754f9-d59w5: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.96s)
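The ServiceCmdConnect failure above follows the same pattern: the echo-server image is never pulled because docker.io refuses further unauthenticated pulls. A possible mitigation on a runner that already has a cached copy of the image would be to side-load it into the cluster so the kubelet never has to contact Docker Hub (sketch only; assumes the image is present on the host):

  # load the host's cached image into the functional-399479 profile
  minikube -p functional-399479 image load kicbase/echo-server:latest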

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-399479 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-399479 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-x55vv" [cfee82c9-4063-410a-bbcc-2e1c3f0a5df9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399479 -n functional-399479
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-10 05:59:41.288491612 +0000 UTC m=+1865.339943739
functional_test.go:1460: (dbg) Run:  kubectl --context functional-399479 describe po hello-node-75c85bcc94-x55vv -n default
functional_test.go:1460: (dbg) kubectl --context functional-399479 describe po hello-node-75c85bcc94-x55vv -n default:
Name:             hello-node-75c85bcc94-x55vv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399479/192.168.50.97
Start Time:       Wed, 10 Dec 2025 05:49:40 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6zw5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-c6zw5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-x55vv to functional-399479
Warning  Failed     9m30s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m43s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m44s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     2m12s (x5 over 9m30s)  kubelet            Error: ErrImagePull
Warning  Failed     2m12s (x3 over 6m13s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     68s (x16 over 9m30s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    5s (x21 over 9m30s)    kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-399479 logs hello-node-75c85bcc94-x55vv -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-399479 logs hello-node-75c85bcc94-x55vv -n default: exit status 1 (70.10875ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-x55vv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-399479 logs hello-node-75c85bcc94-x55vv -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)
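Another common workaround for the toomanyrequests errors seen in the DeployApp events is to authenticate the pulls. A minimal sketch, assuming Docker Hub credentials are available to the CI job (the secret name regcred and the <user>/<token> placeholders are illustrative):

  kubectl --context functional-399479 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<token>
  kubectl --context functional-399479 patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"regcred"}]}'

After the default service account is patched, re-created pods in the default namespace pull against the authenticated quota instead of the per-IP anonymous limit.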

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 service --namespace=default --https --url hello-node: exit status 115 (265.255988ms)

                                                
                                                
-- stdout --
	https://192.168.50.97:31082
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-399479 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 service hello-node --url --format={{.IP}}: exit status 115 (257.288176ms)

                                                
                                                
-- stdout --
	192.168.50.97
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-399479 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 service hello-node --url: exit status 115 (264.385986ms)

                                                
                                                
-- stdout --
	http://192.168.50.97:31082
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-399479 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.50.97:31082
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.26s)
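The three ServiceCmd subtests above (HTTPS, Format, URL) fail with SVC_UNREACHABLE as a direct consequence of the DeployApp failure: the NodePort exists, but the service has no ready endpoints because the hello-node pod is stuck in ImagePullBackOff. A quick way to confirm this (diagnostic sketch, not part of the test run):

  kubectl --context functional-399479 get endpoints hello-node     # no addresses -> nothing backs the service
  kubectl --context functional-399479 get pods -l app=hello-node   # shows the ImagePullBackOff pod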

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (4.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399582 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399582 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399582 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-399582 --alsologtostderr -v=1] stderr:
I1210 06:03:51.894148  264340 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:51.894397  264340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:51.894406  264340 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:51.894411  264340 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:51.894629  264340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:51.894948  264340 mustload.go:66] Loading cluster: functional-399582
I1210 06:03:51.895365  264340 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:51.897298  264340 host.go:66] Checking if "functional-399582" exists ...
I1210 06:03:51.897493  264340 api_server.go:166] Checking apiserver status ...
I1210 06:03:51.897541  264340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:03:51.899949  264340 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:51.900325  264340 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:51.900346  264340 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:51.900467  264340 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:51.998431  264340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7007/cgroup
W1210 06:03:52.012386  264340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7007/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1210 06:03:52.012444  264340 ssh_runner.go:195] Run: ls
I1210 06:03:52.019645  264340 api_server.go:253] Checking apiserver healthz at https://192.168.50.120:8441/healthz ...
I1210 06:03:52.026033  264340 api_server.go:279] https://192.168.50.120:8441/healthz returned 200:
ok
W1210 06:03:52.026092  264340 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1210 06:03:52.026259  264340 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:52.026278  264340 addons.go:70] Setting dashboard=true in profile "functional-399582"
I1210 06:03:52.026285  264340 addons.go:239] Setting addon dashboard=true in "functional-399582"
I1210 06:03:52.026309  264340 host.go:66] Checking if "functional-399582" exists ...
I1210 06:03:52.029714  264340 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1210 06:03:52.031337  264340 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1210 06:03:52.032642  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1210 06:03:52.032662  264340 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1210 06:03:52.035760  264340 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:52.036225  264340 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:52.036254  264340 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:52.036395  264340 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:52.135163  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1210 06:03:52.135199  264340 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1210 06:03:52.159337  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1210 06:03:52.159391  264340 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1210 06:03:52.186585  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1210 06:03:52.186614  264340 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1210 06:03:52.213574  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1210 06:03:52.213611  264340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1210 06:03:52.237526  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1210 06:03:52.237556  264340 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1210 06:03:52.262431  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1210 06:03:52.262463  264340 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1210 06:03:52.289649  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1210 06:03:52.289678  264340 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1210 06:03:52.316631  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1210 06:03:52.316674  264340 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1210 06:03:52.344420  264340 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1210 06:03:52.344449  264340 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1210 06:03:52.368941  264340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 06:03:53.171311  264340 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-399582 addons enable metrics-server

                                                
                                                
I1210 06:03:53.172675  264340 addons.go:202] Writing out "functional-399582" config to set dashboard=true...
W1210 06:03:53.173088  264340 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1210 06:03:53.174078  264340 kapi.go:59] client config for functional-399582: &rest.Config{Host:"https://192.168.50.120:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.key", CAFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1210 06:03:53.174760  264340 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1210 06:03:53.174789  264340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1210 06:03:53.174797  264340 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1210 06:03:53.174806  264340 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1210 06:03:53.174813  264340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1210 06:03:53.185020  264340 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  d33e02b9-1151-493c-99a0-83382c83b703 973 0 2025-12-10 06:03:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-10 06:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.90.246,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.90.246],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
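The "Verifying dashboard health" step above builds a client from the test's kubeconfig and looks up the kubernetes-dashboard Service (the service.go:215 record). The following is a minimal client-go sketch of that lookup, not minikube's actual code: the namespace, service name, and KUBECONFIG environment variable are taken from the log, everything else is illustrative.

// Sketch: resolve the kubernetes-dashboard Service the way the health check does.
// Assumes KUBECONFIG points at the functional-399582 cluster's kubeconfig and that
// its current-context selects that cluster (as in this CI run).
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The CI run sets KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same namespace/name pair reported in the "Found service" log record.
	svc, err := cs.CoreV1().Services("kubernetes-dashboard").
		Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service %s with ClusterIP %s\n", svc.Name, svc.Spec.ClusterIP)
}

Once the Service resolves (here with ClusterIP 10.100.90.246), minikube moves on to the proxy step, which is where this run fails below.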
W1210 06:03:53.185197  264340 out.go:285] * Launching proxy ...
* Launching proxy ...
I1210 06:03:53.185275  264340 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-399582 proxy --port 36195]
I1210 06:03:53.185661  264340 dashboard.go:159] Waiting for kubectl to output host:port ...
I1210 06:03:53.233664  264340 out.go:203] 
W1210 06:03:53.235020  264340 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1210 06:03:53.235038  264340 out.go:285] * 
* 
W1210 06:03:53.242009  264340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1210 06:03:53.243693  264340 out.go:203] 
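The HOST_KUBECTL_PROXY exit above means the spawned `kubectl proxy` process hit EOF before printing its host:port (dashboard.go:159 is "Waiting for kubectl to output host:port"). Below is a minimal sketch, not minikube's implementation, of launching the same proxy command and waiting for its "Starting to serve on host:port" line; the context name and port come from the log, while the regex and 30s timeout are assumptions for illustration.

// Sketch: start `kubectl proxy` and wait for it to report where it is serving.
// If the process exits before printing that line, we see EOF, the same symptom
// reported as "readByteWithTimeout: EOF" above.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-399582", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	hostPort := make(chan string, 1)
	go func() {
		// kubectl proxy normally prints e.g. "Starting to serve on 127.0.0.1:36195".
		re := regexp.MustCompile(`Starting to serve on (\S+)`)
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if m := re.FindStringSubmatch(sc.Text()); m != nil {
				hostPort <- m[1]
				return
			}
		}
		close(hostPort) // stdout closed (EOF) before the expected line appeared
	}()

	select {
	case hp, ok := <-hostPort:
		if !ok {
			fmt.Println("kubectl proxy exited before printing host:port (EOF)")
		} else {
			fmt.Println("proxy serving on", hp)
		}
	case <-time.After(30 * time.Second):
		fmt.Println("timed out waiting for kubectl proxy output")
	}
	_ = cmd.Process.Kill()
}

Running this against a healthy cluster prints the serving address; reproducing the EOF path by hand (for example, by pointing --context at a stopped cluster) gives the same failure mode the test reports.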
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-399582 -n functional-399582
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 logs -n 25: (1.671270965s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-399582 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ mount     │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001:/mount-9p --alsologtostderr -v=1              │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh       │ functional-399582 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh -- ls -la /mount-9p                                                                                                           │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh cat /mount-9p/test-1765346581790005656                                                                                        │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh stat /mount-9p/created-by-test                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh sudo umount -f /mount-9p                                                                                                      │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ mount     │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3204528753/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh       │ functional-399582 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh       │ functional-399582 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh -- ls -la /mount-9p                                                                                                           │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh sudo umount -f /mount-9p                                                                                                      │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ mount     │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount3 --alsologtostderr -v=1                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh       │ functional-399582 ssh findmnt -T /mount1                                                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ mount     │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount2 --alsologtostderr -v=1                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ mount     │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount1 --alsologtostderr -v=1                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh       │ functional-399582 ssh findmnt -T /mount1                                                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh findmnt -T /mount2                                                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh       │ functional-399582 ssh findmnt -T /mount3                                                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ mount     │ -p functional-399582 --kill=true                                                                                                                    │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start     │ -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start     │ -p functional-399582 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                     │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start     │ -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-399582 --alsologtostderr -v=1                                                                                      │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:03:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:03:51.766799  264309 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:03:51.767087  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767097  264309 out.go:374] Setting ErrFile to fd 2...
	I1210 06:03:51.767101  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767389  264309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:03:51.767816  264309 out.go:368] Setting JSON to false
	I1210 06:03:51.768682  264309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27979,"bootTime":1765318653,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:03:51.768770  264309 start.go:143] virtualization: kvm guest
	I1210 06:03:51.770686  264309 out.go:179] * [functional-399582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:03:51.772002  264309 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:03:51.772030  264309 notify.go:221] Checking for updates...
	I1210 06:03:51.774318  264309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:03:51.775690  264309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 06:03:51.776963  264309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 06:03:51.778488  264309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:03:51.780172  264309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:03:51.782707  264309 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:03:51.783277  264309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:03:51.816250  264309 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:03:51.817678  264309 start.go:309] selected driver: kvm2
	I1210 06:03:51.817698  264309 start.go:927] validating driver "kvm2" against &{Name:functional-399582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-399582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:03:51.817811  264309 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:03:51.819986  264309 out.go:203] 
	W1210 06:03:51.821327  264309 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:03:51.822701  264309 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.538777970Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a13e8e8-338d-4943-94b5-035e9bc09a16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.539119908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a13e8e8-338d-4943-94b5-035e9bc09a16 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.544741253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a87587d-7333-42df-a82d-2da63fa4f6ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.544821646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a87587d-7333-42df-a82d-2da63fa4f6ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.545146918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a87587d-7333-42df-a82d-2da63fa4f6ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.546584291Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=50b0266e-d3ef-4d2a-af83-b3f185528b78 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.546919228Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1765346630828842426,StartedAt:1765346630872135478,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx:alpine,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp/mount,HostPath:/tmp/hostpath-provisioner/default/myclaim,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b288f573-315c-487e-972f-525794180d08/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b288f573-315c-487e-972f-525794180d08/containers/myfrontend/3f946903,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b288f573-315c-487e-972f-525794180d08/volumes/kubernetes.io~projected/kube-api-access-jk9l4,Readonly:true,SelinuxRelabel:false,Propaga
tion:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_sp-pod_b288f573-315c-487e-972f-525794180d08/myfrontend/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=50b0266e-d3ef-4d2a-af83-b3f185528b78 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.547668279Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8753523c-1c59-4290-ad17-e3051479e968 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.547820175Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1765346617822039488,StartedAt:1765346617853579973,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql:8.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d6d84736-d96b-4bb0-9ace-fe6ef83567c2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d6d84736-d96b-4bb0-9ace-fe6ef83567c2/containers/mysql/ba560d7d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d6d84736-d96b-4bb0-9ace-fe6ef83567c2/volumes/kubernetes.io~projected/kube-api-access-v5ljq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/po
ds/default_mysql-7d7b65bc95-k8mpc_d6d84736-d96b-4bb0-9ace-fe6ef83567c2/mysql/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:70000,CpuShares:614,MemoryLimitInBytes:734003200,OomScoreAdj:869,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:734003200,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8753523c-1c59-4290-ad17-e3051479e968 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.548929534Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,Verbose:false,}" file="otel-collector/interceptors.go:62" id=20933cdf-6222-4ff4-ae51-0a54092cbb92 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.549234214Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},State:CONTAINER_RUNNING,CreatedAt:1765346547651392709,StartedAt:1765346547694801884,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ab242a4c-35c9-4fef-9a3a-5c1e1717225e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ab242a4c-35c9-4fef-9a3a-5c1e1717225e/containers/storage-provisioner/0771b522,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/ab242a4c-35c9-4fef-9a3a-5c1e1717225e/volumes/kubernetes.io~projected/kube-api-access-ljzj9,Readonly:true,SelinuxRelabel:fal
se,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_ab242a4c-35c9-4fef-9a3a-5c1e1717225e/storage-provisioner/5.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=20933cdf-6222-4ff4-ae51-0a54092cbb92 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.549959051Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ea040b3e-d443-4879-8bf9-690d6157ec1a name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.550051097Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1765346547647795118,StartedAt:1765346547684719649,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.13.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/43a5bd55-8567-4987-8b52-a8afdde6d923/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/43a5bd55-8567-4987-8b52-a8afdde6d923/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/terminati
on-log,HostPath:/var/lib/kubelet/pods/43a5bd55-8567-4987-8b52-a8afdde6d923/containers/coredns/a96df48f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/43a5bd55-8567-4987-8b52-a8afdde6d923/volumes/kubernetes.io~projected/kube-api-access-4jrft,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7d764666f9-2j4gx_43a5bd55-8567-4987-8b52-a8afdde6d923/coredns/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:983,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[st
ring]string{},}" file="otel-collector/interceptors.go:74" id=ea040b3e-d443-4879-8bf9-690d6157ec1a name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.550149593Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=6bc07e27-b3f7-4df5-a8c0-0a60ee589e34 name=/runtime.v1.RuntimeService/Status
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.550183737Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6bc07e27-b3f7-4df5-a8c0-0a60ee589e34 name=/runtime.v1.RuntimeService/Status
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.551012259Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cc35a82f-7a62-4a8b-950b-b17432f8c4e9 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.551139211Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1765346545000712752,StartedAt:1765346545064135439,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.35.0-rc.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":844
1,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0ddd915c1115cab07e79ed04626784ed/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0ddd915c1115cab07e79ed04626784ed/containers/kube-apiserver/655322c0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:t
rue,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-functional-399582_0ddd915c1115cab07e79ed04626784ed/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cc35a82f-7a62-4a8b-950b-b17432f8c4e9 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.551858578Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,Verbose:false,}" file="otel-collector/interceptors.go:62" id=aeea618d-4fd1-4348-ab6d-78fb24ab8226 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.551961526Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1765346543077087920,StartedAt:1765346544819700282,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.35.0-rc.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/820864d487c46dbb13dc25752422a6c0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/820864d487c46dbb13dc25752422a6c0/containers/kube-scheduler/30e20f68,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-functiona
l-399582_820864d487c46dbb13dc25752422a6c0/kube-scheduler/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=aeea618d-4fd1-4348-ab6d-78fb24ab8226 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.552829666Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8a798e67-29a1-42de-a7d4-5ae5d6c0ab02 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.552953833Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1765346542782505100,StartedAt:1765346543043155892,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.35.0-rc.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"h
ostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/19188f1a520cc8cda23c4f36263cc30e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/19188f1a520cc8cda23c4f36263cc30e/containers/kube-controller-manager/8da11dc7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller
-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-functional-399582_19188f1a520cc8cda23c4f36263cc30e/kube-controller-manager/3.log,Resources:&ContainerResources{L
inux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8a798e67-29a1-42de-a7d4-5ae5d6c0ab02 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.553236767Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=92981cce-8a46-4ad6-a6c5-f75216ae0853 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.553500192Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1765346542770664946,StartedAt:1765346542949431263,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.35.0-rc.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4707d892-af08-4059-a693-2e05840d221e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4707d892-af08-4059-a693-2e05840d221e/containers/kube-proxy/ff25cd26,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath
:/var/lib/kubelet/pods/4707d892-af08-4059-a693-2e05840d221e/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/4707d892-af08-4059-a693-2e05840d221e/volumes/kubernetes.io~projected/kube-api-access-tn5q9,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-7v74x_4707d892-af08-4059-a693-2e05840d221e/kube-proxy/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="
otel-collector/interceptors.go:74" id=92981cce-8a46-4ad6-a6c5-f75216ae0853 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.554495716Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2d0ad0c0-1baf-4da6-9187-07c0f7d71860 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 06:03:54 functional-399582 crio[5970]: time="2025-12-10 06:03:54.554951935Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1765346542621497043,StartedAt:1765346542781386963,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.6.6-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/201b80fd997ce091bd5c05d64ceda09d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/201b80fd997ce091bd5c05d64ceda09d/containers/etcd/a968c8fa,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagat
ion:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-functional-399582_201b80fd997ce091bd5c05d64ceda09d/etcd/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2d0ad0c0-1baf-4da6-9187-07c0f7d71860 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	350c6d4bb4039       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              3 seconds ago        Running             myfrontend                0                   7a993bbe44118       sp-pod                                      default
	22ac9989d94c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           9 seconds ago        Exited              mount-munger              0                   f528bb56b9b49       busybox-mount                               default
	338bc49c99044       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   16 seconds ago       Running             mysql                     0                   805e789926c8e       mysql-7d7b65bc95-k8mpc                      default
	01d295ca1de58       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              About a minute ago   Running             coredns                   3                   88c94a4a47516       coredns-7d764666f9-2j4gx                    kube-system
	9290df2a81574       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              About a minute ago   Running             storage-provisioner       5                   f0b953c47b3c1       storage-provisioner                         kube-system
	15b1d362125b3       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              About a minute ago   Running             kube-apiserver            0                   30886e95d556e       kube-apiserver-functional-399582            kube-system
	54f34bcb44a6a       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              About a minute ago   Running             kube-scheduler            3                   2e6503bc41d2f       kube-scheduler-functional-399582            kube-system
	6c5cb4f1b5b0d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              About a minute ago   Running             kube-controller-manager   3                   a49e5dbc4ecb8       kube-controller-manager-functional-399582   kube-system
	00f6bd2d4e02a       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              About a minute ago   Running             kube-proxy                3                   b402d7218b1db       kube-proxy-7v74x                            kube-system
	7d45a8de7df26       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              About a minute ago   Running             etcd                      3                   68e5736951955       etcd-functional-399582                      kube-system
	2b71a11752f10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              2 minutes ago        Exited              storage-provisioner       4                   c3b6fd9071eb2       storage-provisioner                         kube-system
	8c9d6629ac789       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              2 minutes ago        Exited              kube-proxy                2                   a240334f39d4d       kube-proxy-7v74x                            kube-system
	dc895e0e74f8d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              2 minutes ago        Exited              kube-controller-manager   2                   3dce0aebedba9       kube-controller-manager-functional-399582   kube-system
	34dacbe5b37ec       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              2 minutes ago        Exited              kube-scheduler            2                   e653eccbaa11b       kube-scheduler-functional-399582            kube-system
	33548d557bb40       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              2 minutes ago        Exited              etcd                      2                   09ac8ad73ee77       etcd-functional-399582                      kube-system
	f657dd69967dd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              2 minutes ago        Exited              coredns                   2                   28dccbf2db194       coredns-7d764666f9-2j4gx                    kube-system
	
	
	==> coredns [01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53666 - 39596 "HINFO IN 9075107278779861811.4103704832365595779. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.153919535s
	
	
	==> coredns [f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d] <==
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38741 - 41185 "HINFO IN 4468328331559939092.7401650893011338442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028651389s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-399582
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-399582
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=functional-399582
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_00_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:00:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-399582
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:03:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:03:28 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:03:28 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:03:28 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:03:28 +0000   Wed, 10 Dec 2025 06:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.120
	  Hostname:    functional-399582
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 4561933152674b80bf8a351184459707
	  System UUID:                45619331-5267-4b80-bf8a-351184459707
	  Boot ID:                    465e07fa-e16f-45c3-9894-63f201e0bd6a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-j9h2z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  default                     hello-node-connect-9f67c86d4-ks5cc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     mysql-7d7b65bc95-k8mpc                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    62s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-7d764666f9-2j4gx                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system                 etcd-functional-399582                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m29s
	  kube-system                 kube-apiserver-functional-399582              250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-functional-399582     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-proxy-7v74x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-scheduler-functional-399582              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-nqcvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-psrc4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  3m24s  node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  2m28s  node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  2m9s   node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  85s    node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	
	
	==> dmesg <==
	[Dec10 06:00] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.212041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084806] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112029] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.130538] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.000239] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.649001] kauditd_printk_skb: 249 callbacks suppressed
	[Dec10 06:01] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.111759] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.929289] kauditd_printk_skb: 328 callbacks suppressed
	[  +5.527938] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.622908] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.397086] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 06:02] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.110469] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.640447] kauditd_printk_skb: 78 callbacks suppressed
	[  +3.353148] kauditd_printk_skb: 290 callbacks suppressed
	[ +18.397540] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.093117] kauditd_printk_skb: 112 callbacks suppressed
	[  +0.000039] kauditd_printk_skb: 53 callbacks suppressed
	[Dec10 06:03] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.456310] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.041754] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621] <==
	{"level":"info","ts":"2025-12-10T06:01:40.728199Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:01:40.729118Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:01:40.738662Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:01:40.740136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:01:40.740180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:01:40.745985Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:01:40.751451Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.120:2379"}
	{"level":"info","ts":"2025-12-10T06:02:07.356669Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T06:02:07.359714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-399582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.120:2380"],"advertise-client-urls":["https://192.168.50.120:2379"]}
	{"level":"error","ts":"2025-12-10T06:02:07.375647Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:02:07.452960Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:02:07.454643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T06:02:07.454973Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455228Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:02:07.455499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.455116Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4380e8ffa07dad0","current-leader-member-id":"d4380e8ffa07dad0"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455188Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.120:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455603Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.120:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:02:07.455612Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.120:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.455659Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T06:02:07.455677Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T06:02:07.459518Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.120:2380"}
	{"level":"error","ts":"2025-12-10T06:02:07.459591Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.120:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.459614Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.120:2380"}
	{"level":"info","ts":"2025-12-10T06:02:07.459619Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-399582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.120:2380"],"advertise-client-urls":["https://192.168.50.120:2379"]}
	
	
	==> etcd [7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc] <==
	{"level":"info","ts":"2025-12-10T06:02:23.276359Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:02:23.276393Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:02:23.278668Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:03:28.390111Z","caller":"traceutil/trace.go:172","msg":"trace[1029140176] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"158.045196ms","start":"2025-12-10T06:03:28.231958Z","end":"2025-12-10T06:03:28.390003Z","steps":["trace[1029140176] 'process raft request'  (duration: 156.009786ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:29.729823Z","caller":"traceutil/trace.go:172","msg":"trace[309646084] linearizableReadLoop","detail":"{readStateIndex:955; appliedIndex:955; }","duration":"135.121838ms","start":"2025-12-10T06:03:29.594685Z","end":"2025-12-10T06:03:29.729807Z","steps":["trace[309646084] 'read index received'  (duration: 135.116607ms)","trace[309646084] 'applied index is now lower than readState.Index'  (duration: 4.624µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:03:29.730014Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.26309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:29.730084Z","caller":"traceutil/trace.go:172","msg":"trace[934611356] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:869; }","duration":"135.395784ms","start":"2025-12-10T06:03:29.594681Z","end":"2025-12-10T06:03:29.730077Z","steps":["trace[934611356] 'agreement among raft nodes before linearized reading'  (duration: 135.22309ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:29.732511Z","caller":"traceutil/trace.go:172","msg":"trace[1988418188] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"152.04767ms","start":"2025-12-10T06:03:29.580450Z","end":"2025-12-10T06:03:29.732498Z","steps":["trace[1988418188] 'process raft request'  (duration: 149.665685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.184689Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.068897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.184758Z","caller":"traceutil/trace.go:172","msg":"trace[1516461909] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:871; }","duration":"204.147737ms","start":"2025-12-10T06:03:31.980600Z","end":"2025-12-10T06:03:32.184747Z","steps":["trace[1516461909] 'range keys from in-memory index tree'  (duration: 204.017724ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.184917Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"208.507306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.184985Z","caller":"traceutil/trace.go:172","msg":"trace[1367031098] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:871; }","duration":"208.527207ms","start":"2025-12-10T06:03:31.976401Z","end":"2025-12-10T06:03:32.184929Z","steps":["trace[1367031098] 'range keys from in-memory index tree'  (duration: 208.440076ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.185598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.573308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.186337Z","caller":"traceutil/trace.go:172","msg":"trace[239063490] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:871; }","duration":"189.40771ms","start":"2025-12-10T06:03:31.996900Z","end":"2025-12-10T06:03:32.186308Z","steps":["trace[239063490] 'range keys from in-memory index tree'  (duration: 187.691756ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:34.083869Z","caller":"traceutil/trace.go:172","msg":"trace[558704773] transaction","detail":"{read_only:false; response_revision:872; number_of_response:1; }","duration":"214.439426ms","start":"2025-12-10T06:03:33.869417Z","end":"2025-12-10T06:03:34.083856Z","steps":["trace[558704773] 'process raft request'  (duration: 214.272956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:34.084791Z","caller":"traceutil/trace.go:172","msg":"trace[1316996738] linearizableReadLoop","detail":"{readStateIndex:958; appliedIndex:958; }","duration":"108.118952ms","start":"2025-12-10T06:03:33.976659Z","end":"2025-12-10T06:03:34.084778Z","steps":["trace[1316996738] 'read index received'  (duration: 106.960887ms)","trace[1316996738] 'applied index is now lower than readState.Index'  (duration: 3.769µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:03:34.084906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.260327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:34.084928Z","caller":"traceutil/trace.go:172","msg":"trace[1302307117] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:872; }","duration":"108.291985ms","start":"2025-12-10T06:03:33.976629Z","end":"2025-12-10T06:03:34.084921Z","steps":["trace[1302307117] 'agreement among raft nodes before linearized reading'  (duration: 108.239758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:34.085077Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.888434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:34.085117Z","caller":"traceutil/trace.go:172","msg":"trace[859508988] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:872; }","duration":"105.928914ms","start":"2025-12-10T06:03:33.979182Z","end":"2025-12-10T06:03:34.085111Z","steps":["trace[859508988] 'agreement among raft nodes before linearized reading'  (duration: 105.87746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:36.641621Z","caller":"traceutil/trace.go:172","msg":"trace[285473757] linearizableReadLoop","detail":"{readStateIndex:959; appliedIndex:959; }","duration":"328.226725ms","start":"2025-12-10T06:03:36.313376Z","end":"2025-12-10T06:03:36.641603Z","steps":["trace[285473757] 'read index received'  (duration: 328.221955ms)","trace[285473757] 'applied index is now lower than readState.Index'  (duration: 4.08µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:03:36.641759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"328.373379ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:36.641801Z","caller":"traceutil/trace.go:172","msg":"trace[776500912] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:872; }","duration":"328.428571ms","start":"2025-12-10T06:03:36.313365Z","end":"2025-12-10T06:03:36.641794Z","steps":["trace[776500912] 'agreement among raft nodes before linearized reading'  (duration: 328.342852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:36.642700Z","caller":"traceutil/trace.go:172","msg":"trace[463181425] transaction","detail":"{read_only:false; response_revision:873; number_of_response:1; }","duration":"530.748137ms","start":"2025-12-10T06:03:36.111941Z","end":"2025-12-10T06:03:36.642689Z","steps":["trace[463181425] 'process raft request'  (duration: 530.387731ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:36.643189Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:03:36.111920Z","time spent":"530.806841ms","remote":"127.0.0.1:46456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:872 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 06:03:55 up 4 min,  0 users,  load average: 2.21, 1.06, 0.42
	Linux functional-399582 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be] <==
	I1210 06:02:26.889240       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:02:26.889255       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:02:26.889348       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:02:26.929367       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:02:27.390736       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:02:27.697098       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:02:28.662765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:02:28.724196       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:02:28.767987       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:02:28.776874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:02:29.854631       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:02:30.052589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:02:47.295902       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.111.132"}
	I1210 06:02:51.836462       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:02:51.963935       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.81.189"}
	I1210 06:02:52.612177       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.154.107"}
	E1210 06:03:46.020463       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:58956: use of closed network connection
	E1210 06:03:47.645067       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38392: use of closed network connection
	E1210 06:03:48.800925       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38422: use of closed network connection
	E1210 06:03:50.750885       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38454: use of closed network connection
	I1210 06:03:52.779925       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:03:53.114452       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.90.246"}
	I1210 06:03:53.153470       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.243.139"}
	E1210 06:03:53.500768       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38478: use of closed network connection
	I1210 06:03:53.715584       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.241.71"}
	
	
	==> kube-controller-manager [6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01] <==
	I1210 06:02:29.813987       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.814096       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.814122       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.819585       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.831376       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.859735       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.863993       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865202       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865411       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866067       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866235       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866489       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.873095       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.873108       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:02:29.873112       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:02:29.878549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.908705       1 shared_informer.go:377] "Caches are synced"
	E1210 06:03:52.901570       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.918491       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.930744       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.945711       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.953566       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.960160       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.960307       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dc895e0e74f8d61d1ab39b2b340b40b857cb1e26f1f2148da431abb016debcd9] <==
	I1210 06:01:45.546440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.548544       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 06:01:45.549322       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:45.549365       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546459       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546466       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546472       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546483       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546685       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.550963       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:01:45.551409       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-399582"
	I1210 06:01:45.551472       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:01:45.546953       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546961       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551593       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551646       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551669       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.548023       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.541885       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.602522       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647481       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647537       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647543       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:01:45.647547       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:01:45.957439       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc] <==
	I1210 06:02:23.233045       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:02:27.135719       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:27.135761       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.120"]
	E1210 06:02:27.135872       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:02:27.173126       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:02:27.173199       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:02:27.173221       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:02:27.184408       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:02:27.184827       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:02:27.184871       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:02:27.189828       1 config.go:200] "Starting service config controller"
	I1210 06:02:27.189881       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:02:27.189909       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:02:27.189924       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:02:27.189945       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:02:27.189958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:02:27.190769       1 config.go:309] "Starting node config controller"
	I1210 06:02:27.190811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:02:27.190827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:02:27.290017       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:02:27.290060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:02:27.290095       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd] <==
	I1210 06:01:43.130899       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:43.231750       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:43.231846       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.120"]
	E1210 06:01:43.231918       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:01:43.284185       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:01:43.284368       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:01:43.284394       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:01:43.294955       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:01:43.296355       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:01:43.296416       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:01:43.303620       1 config.go:200] "Starting service config controller"
	I1210 06:01:43.303677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:01:43.303694       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:01:43.303698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:01:43.303710       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:01:43.303723       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:01:43.306467       1 config.go:309] "Starting node config controller"
	I1210 06:01:43.306788       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:01:43.306819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:01:43.404750       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:01:43.404809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:01:43.405192       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458] <==
	I1210 06:01:40.977506       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:01:42.287034       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:01:42.287140       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:01:42.287151       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:01:42.287157       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:01:42.400699       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:01:42.400795       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:01:42.414606       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:01:42.415148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:01:42.416387       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:42.415162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:01:42.517587       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:07.379186       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 06:02:07.381172       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 06:02:07.384358       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 06:02:07.384534       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99] <==
	I1210 06:02:25.191251       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:02:26.759517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:02:26.760339       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:02:26.760394       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:02:26.760412       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:02:26.813752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:02:26.813859       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:02:26.827377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:02:26.827471       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:02:26.827509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:02:26.827667       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:02:26.928312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:03:47 functional-399582 kubelet[6885]: I1210 06:03:47.913882    6885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rl6qv\" (UniqueName: \"kubernetes.io/projected/735ed8f9-562b-402d-a721-a403dd9bf390-kube-api-access-rl6qv\") on node \"functional-399582\" DevicePath \"\""
	Dec 10 06:03:47 functional-399582 kubelet[6885]: I1210 06:03:47.913911    6885 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/735ed8f9-562b-402d-a721-a403dd9bf390-test-volume\") on node \"functional-399582\" DevicePath \"\""
	Dec 10 06:03:48 functional-399582 kubelet[6885]: I1210 06:03:48.395379    6885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf"
	Dec 10 06:03:48 functional-399582 kubelet[6885]: E1210 06:03:48.801819    6885 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37014->127.0.0.1:34999: write tcp 127.0.0.1:37014->127.0.0.1:34999: write: broken pipe
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.409803    6885 scope.go:122] "RemoveContainer" containerID="6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517"
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.424609    6885 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-kube-api-access-srrz7\" (UniqueName: \"kubernetes.io/projected/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-kube-api-access-srrz7\") pod \"cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d\" (UID: \"cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d\") "
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.424648    6885 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\" (UniqueName: \"kubernetes.io/host-path/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\") pod \"cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d\" (UID: \"cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d\") "
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.424717    6885 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf" pod "cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d" (UID: "cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d"). InnerVolumeSpecName "pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.429489    6885 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-kube-api-access-srrz7" pod "cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d" (UID: "cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d"). InnerVolumeSpecName "kube-api-access-srrz7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.524791    6885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-srrz7\" (UniqueName: \"kubernetes.io/projected/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-kube-api-access-srrz7\") on node \"functional-399582\" DevicePath \"\""
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.524916    6885 reconciler_common.go:299] "Volume detached for volume \"pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\" (UniqueName: \"kubernetes.io/host-path/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d-pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\") on node \"functional-399582\" DevicePath \"\""
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.531662    6885 scope.go:122] "RemoveContainer" containerID="6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517"
	Dec 10 06:03:49 functional-399582 kubelet[6885]: E1210 06:03:49.532662    6885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517\": container with ID starting with 6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517 not found: ID does not exist" containerID="6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517"
	Dec 10 06:03:49 functional-399582 kubelet[6885]: I1210 06:03:49.532692    6885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517"} err="failed to get container status \"6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517\": rpc error: code = NotFound desc = could not find container \"6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517\": container with ID starting with 6b1f549a85506d6f393c7c7dfb8ec3811a6fa97a2a81bc4af65279731f2fa517 not found: ID does not exist"
	Dec 10 06:03:50 functional-399582 kubelet[6885]: I1210 06:03:50.030422    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9l4\" (UniqueName: \"kubernetes.io/projected/b288f573-315c-487e-972f-525794180d08-kube-api-access-jk9l4\") pod \"sp-pod\" (UID: \"b288f573-315c-487e-972f-525794180d08\") " pod="default/sp-pod"
	Dec 10 06:03:50 functional-399582 kubelet[6885]: I1210 06:03:50.030497    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\" (UniqueName: \"kubernetes.io/host-path/b288f573-315c-487e-972f-525794180d08-pvc-bcc15eaa-771a-44f6-8d6f-458b6d9feeaf\") pod \"sp-pod\" (UID: \"b288f573-315c-487e-972f-525794180d08\") " pod="default/sp-pod"
	Dec 10 06:03:50 functional-399582 kubelet[6885]: I1210 06:03:50.382998    6885 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d" path="/var/lib/kubelet/pods/cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d/volumes"
	Dec 10 06:03:52 functional-399582 kubelet[6885]: I1210 06:03:52.987659    6885 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.987643363 podStartE2EDuration="3.987643363s" podCreationTimestamp="2025-12-10 06:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:03:51.462793708 +0000 UTC m=+87.297468180" watchObservedRunningTime="2025-12-10 06:03:52.987643363 +0000 UTC m=+88.822317863"
	Dec 10 06:03:53 functional-399582 kubelet[6885]: I1210 06:03:53.160084    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b81838e-8ec2-46e6-bfad-8491e28f1898-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-psrc4\" (UID: \"7b81838e-8ec2-46e6-bfad-8491e28f1898\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-psrc4"
	Dec 10 06:03:53 functional-399582 kubelet[6885]: I1210 06:03:53.160381    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gz4\" (UniqueName: \"kubernetes.io/projected/7b81838e-8ec2-46e6-bfad-8491e28f1898-kube-api-access-q8gz4\") pod \"kubernetes-dashboard-b84665fb8-psrc4\" (UID: \"7b81838e-8ec2-46e6-bfad-8491e28f1898\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-psrc4"
	Dec 10 06:03:53 functional-399582 kubelet[6885]: I1210 06:03:53.160476    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fhq\" (UniqueName: \"kubernetes.io/projected/2b5b8546-f5df-45fc-aa4b-23e2669e7550-kube-api-access-77fhq\") pod \"dashboard-metrics-scraper-5565989548-nqcvt\" (UID: \"2b5b8546-f5df-45fc-aa4b-23e2669e7550\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt"
	Dec 10 06:03:53 functional-399582 kubelet[6885]: I1210 06:03:53.160538    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2b5b8546-f5df-45fc-aa4b-23e2669e7550-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-nqcvt\" (UID: \"2b5b8546-f5df-45fc-aa4b-23e2669e7550\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt"
	Dec 10 06:03:53 functional-399582 kubelet[6885]: I1210 06:03:53.764854    6885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bgp\" (UniqueName: \"kubernetes.io/projected/b0bd85fb-560c-49b9-98cf-8d7c2db9b6bf-kube-api-access-k8bgp\") pod \"hello-node-connect-9f67c86d4-ks5cc\" (UID: \"b0bd85fb-560c-49b9-98cf-8d7c2db9b6bf\") " pod="default/hello-node-connect-9f67c86d4-ks5cc"
	Dec 10 06:03:54 functional-399582 kubelet[6885]: E1210 06:03:54.632872    6885 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765346634632091378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:216484}  inodes_used:{value:97}}"
	Dec 10 06:03:54 functional-399582 kubelet[6885]: E1210 06:03:54.632914    6885 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765346634632091378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:216484}  inodes_used:{value:97}}"
	
	
	==> storage-provisioner [2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7adbd32] <==
	I1210 06:01:53.668408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:01:53.680228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:01:53.680412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:01:53.683844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:01:57.139855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:02:01.400220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:02:04.999500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62] <==
	W1210 06:03:29.735393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:31.751778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:31.861384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:33.865994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:34.091434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:36.102352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:36.651230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:38.655594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:38.661126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:40.665908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:40.678986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:42.684451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:42.691513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:44.696635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:44.727589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:46.735917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:46.749810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:48.783587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:48.821035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:50.826669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:50.837828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:52.844987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:52.851183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:54.875507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:03:54.915858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399582 -n functional-399582
helpers_test.go:270: (dbg) Run:  kubectl --context functional-399582 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4: exit status 1 (91.965903ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:03:02 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 10 Dec 2025 06:03:45 +0000
	      Finished:     Wed, 10 Dec 2025 06:03:45 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rl6qv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rl6qv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  53s   default-scheduler  Successfully assigned default/busybox-mount to functional-399582
	  Normal  Pulling    52s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.464s (41.619s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10s   kubelet            Container created
	  Normal  Started    10s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-j9h2z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:02:51 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rnbfz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rnbfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  64s                default-scheduler  Successfully assigned default/hello-node-5758569b79-j9h2z to functional-399582
	  Warning  Failed     33s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     33s                kubelet            Error: ErrImagePull
	  Normal   BackOff    33s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     33s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x2 over 63s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-ks5cc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:03:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8bgp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k8bgp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-ks5cc to functional-399582
	  Normal  Pulling    1s    kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-nqcvt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-psrc4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (4.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-399582 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-399582 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-ks5cc" [b0bd85fb-560c-49b9-98cf-8d7c2db9b6bf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399582 -n functional-399582
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-10 06:13:53.959791632 +0000 UTC m=+2718.011243763
functional_test.go:1645: (dbg) Run:  kubectl --context functional-399582 describe po hello-node-connect-9f67c86d4-ks5cc -n default
functional_test.go:1645: (dbg) kubectl --context functional-399582 describe po hello-node-connect-9f67c86d4-ks5cc -n default:
Name:             hello-node-connect-9f67c86d4-ks5cc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399582/192.168.50.120
Start Time:       Wed, 10 Dec 2025 06:03:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8bgp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k8bgp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-ks5cc to functional-399582
Normal   Pulling    2m21s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     31s (x4 over 8m3s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     31s (x4 over 8m3s)   kubelet            Error: ErrImagePull
Normal   BackOff    7s (x6 over 8m2s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     7s (x6 over 8m2s)    kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-399582 logs hello-node-connect-9f67c86d4-ks5cc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-399582 logs hello-node-connect-9f67c86d4-ks5cc -n default: exit status 1 (73.308574ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-ks5cc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-399582 logs hello-node-connect-9f67c86d4-ks5cc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-399582 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-ks5cc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399582/192.168.50.120
Start Time:       Wed, 10 Dec 2025 06:03:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8bgp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k8bgp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-ks5cc to functional-399582
Normal   Pulling    2m21s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     31s (x4 over 8m3s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     31s (x4 over 8m3s)   kubelet            Error: ErrImagePull
Normal   BackOff    7s (x6 over 8m2s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     7s (x6 over 8m2s)    kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-399582 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-399582 logs -l app=hello-node-connect: exit status 1 (66.399684ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-ks5cc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-399582 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-399582 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.241.71
IPs:                      10.97.241.71
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32418/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-399582 -n functional-399582
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 logs -n 25: (1.415627508s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                   ARGS                                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount2 --alsologtostderr -v=1      │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ mount          │ -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount1 --alsologtostderr -v=1      │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ ssh            │ functional-399582 ssh findmnt -T /mount1                                                                                                  │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh            │ functional-399582 ssh findmnt -T /mount2                                                                                                  │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh            │ functional-399582 ssh findmnt -T /mount3                                                                                                  │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ mount          │ -p functional-399582 --kill=true                                                                                                          │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start          │ -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start          │ -p functional-399582 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1           │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ start          │ -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-399582 --alsologtostderr -v=1                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ update-context │ functional-399582 update-context --alsologtostderr -v=2                                                                                   │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ update-context │ functional-399582 update-context --alsologtostderr -v=2                                                                                   │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ update-context │ functional-399582 update-context --alsologtostderr -v=2                                                                                   │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ image          │ functional-399582 image ls --format short --alsologtostderr                                                                               │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ image          │ functional-399582 image ls --format yaml --alsologtostderr                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ ssh            │ functional-399582 ssh pgrep buildkitd                                                                                                     │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │                     │
	│ image          │ functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr                                    │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ image          │ functional-399582 image ls --format json --alsologtostderr                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ image          │ functional-399582 image ls --format table --alsologtostderr                                                                               │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ image          │ functional-399582 image ls                                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:03 UTC │ 10 Dec 25 06:03 UTC │
	│ service        │ functional-399582 service list                                                                                                            │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ service        │ functional-399582 service list -o json                                                                                                    │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ service        │ functional-399582 service --namespace=default --https --url hello-node                                                                    │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ service        │ functional-399582 service hello-node --url --format={{.IP}}                                                                               │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ service        │ functional-399582 service hello-node --url                                                                                                │ functional-399582 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:03:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:03:51.766799  264309 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:03:51.767087  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767097  264309 out.go:374] Setting ErrFile to fd 2...
	I1210 06:03:51.767101  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767389  264309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:03:51.767816  264309 out.go:368] Setting JSON to false
	I1210 06:03:51.768682  264309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27979,"bootTime":1765318653,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:03:51.768770  264309 start.go:143] virtualization: kvm guest
	I1210 06:03:51.770686  264309 out.go:179] * [functional-399582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:03:51.772002  264309 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:03:51.772030  264309 notify.go:221] Checking for updates...
	I1210 06:03:51.774318  264309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:03:51.775690  264309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 06:03:51.776963  264309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 06:03:51.778488  264309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:03:51.780172  264309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:03:51.782707  264309 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:03:51.783277  264309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:03:51.816250  264309 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:03:51.817678  264309 start.go:309] selected driver: kvm2
	I1210 06:03:51.817698  264309 start.go:927] validating driver "kvm2" against &{Name:functional-399582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-399582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:03:51.817811  264309 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:03:51.819986  264309 out.go:203] 
	W1210 06:03:51.821327  264309 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:03:51.822701  264309 out.go:203] 
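
The "Last Start" excerpt is one of the --dry-run invocations with --memory 250MB, and the RSRC_INSUFFICIENT_REQ_MEMORY exit shows minikube rejecting the request during validation, before any VM work is attempted, because it falls below the 1800MB usable minimum. A minimal sketch of the same invocation clearing that check (the 2048MB value is illustrative, not taken from this run):

    # Anything below roughly 1800MB is rejected at validation time; 2048MB passes the check.
    out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-rc.1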
	
	
	==> CRI-O <==
	Dec 10 06:13:54 functional-399582 crio[5970]: time="2025-12-10 06:13:54.984927729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765347234984900533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242143,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=232dc8d3-73ef-4ed1-b9de-8ba6ed3d3439 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:54 functional-399582 crio[5970]: time="2025-12-10 06:13:54.986443798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3507d3d-8644-427f-99c8-639347edd2bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:54 functional-399582 crio[5970]: time="2025-12-10 06:13:54.986512856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3507d3d-8644-427f-99c8-639347edd2bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:54 functional-399582 crio[5970]: time="2025-12-10 06:13:54.986795647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3507d3d-8644-427f-99c8-639347edd2bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.030831291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20f132b2-7217-4a12-ab24-c0b82b155117 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.030971148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20f132b2-7217-4a12-ab24-c0b82b155117 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.033086043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71644a2c-efb1-4762-b1de-27ff7980ad9c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.034066522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765347235034041743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242143,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71644a2c-efb1-4762-b1de-27ff7980ad9c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.035120776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9edac40-5468-40b3-8708-08d19fcf6d73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.035230993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9edac40-5468-40b3-8708-08d19fcf6d73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.035632045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9edac40-5468-40b3-8708-08d19fcf6d73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.068564960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b88f1f23-a836-4178-9993-5ee49138a9c7 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.068663445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b88f1f23-a836-4178-9993-5ee49138a9c7 name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.070056281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47fb7c6e-7583-4c53-8a11-94185acba502 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.070918408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765347235070857426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242143,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47fb7c6e-7583-4c53-8a11-94185acba502 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.071927204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ec42839-6bbd-42cc-9d7b-d21bbcdfdae2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.072006020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ec42839-6bbd-42cc-9d7b-d21bbcdfdae2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.072361551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ec42839-6bbd-42cc-9d7b-d21bbcdfdae2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.103843357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f42cb4b-3fcf-4b55-916f-9d8de117337f name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.103970738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f42cb4b-3fcf-4b55-916f-9d8de117337f name=/runtime.v1.RuntimeService/Version
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.105551585Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e434ef9-a6f2-4019-abf2-77ff4c95d6ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.106312911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765347235106242206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242143,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e434ef9-a6f2-4019-abf2-77ff4c95d6ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.107887436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1fbf6d2-36bb-42b1-a2b5-6806ddcd9449 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.108170174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1fbf6d2-36bb-42b1-a2b5-6806ddcd9449 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 06:13:55 functional-399582 crio[5970]: time="2025-12-10 06:13:55.108524618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:350c6d4bb4039b4a3fd8595f77a258de3a8ff8f1a96864099c6a73a6754260e9,PodSandboxId:7a993bbe441183f0bf97d23ca6dbcb78acacf0bde685a13465efd52b01a09ead,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765346630782898986,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b288f573-315c-487e-972f-525794180d08,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809,PodSandboxId:f528bb56b9b49acca32d606bd68956fd448122fb545a21f95623fcd28d5760cf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765346625210836509,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 735ed8f9-562b-402d-a721-a403dd9bf390,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338bc49c99044ee9a046d1af4f7a27f1d97bd42edbba5b9da4a932c48135272e,PodSandboxId:805e789926c8e36bea92ce378548f5eee400c13a9503bc90ead2d95b8f082132,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765346617787382429,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-k8mpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d6d84736-d96b-4bb0-9ace-fe6ef83567c2,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62,PodSandboxId:f0b953c47b3c14fc52563375bacd418c5d8c14c7385432688d6ba8613e49a847,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765346547591023112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kub
ernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608,PodSandboxId:88c94a4a47516a0d2eb8315e1292867f40e00c5e3afcc18b76cda2aad4743c5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765346547604214417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be,PodSandboxId:30886e95d556e56b7a68fd7f082d67495f0c49e41924466ee4fa5785f2a8ad72,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,
CreatedAt:1765346544954433088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddd915c1115cab07e79ed04626784ed,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99,PodSandboxId:2e6503bc41d2f26fb57c8e3aacb8770886c9faf2c7db52de27e040d25452a29f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765346542843696434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01,PodSandboxId:a49e5dbc4ecb85f32390dcebe91a585e244dea76d26c29d77cb0c81ed930af99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a5
6602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765346542659318388,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc,PodSandboxId:b402d7218b1dbe08157047e846
e520d2bf0d1fa9131088e7c73e47689a3b6a91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765346542531721071,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc,PodSandboxId:68e573695195509c03de612062f2299de9178824dc9b40c9aead1077ad8
ab3e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765346542411017008,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7a
dbd32,PodSandboxId:c3b6fd9071eb20056bf229cf26a14132ffaf00bdb7a4163be60ad3574586911d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765346513588844815,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab242a4c-35c9-4fef-9a3a-5c1e1717225e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd,PodSand
boxId:a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765346502868903350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7v74x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4707d892-af08-4059-a693-2e05840d221e,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621,PodSandboxId:09ac8ad73ee77e6723a39c67219b
93bb19b65cd2e7ab3ef335fa9a1711876d37,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765346500193692022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201b80fd997ce091bd5c05d64ceda09d,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc895e0e74f8d61d1ab39b2b340b4
0b857cb1e26f1f2148da431abb016debcd9,PodSandboxId:3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765346500226693912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19188f1a520cc8cda23c4f36263cc30e,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458,PodSandboxId:e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765346500196235747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-399582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820864d487c46dbb13dc25752422a6c0,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernet
es.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d,PodSandboxId:28dccbf2db19449ef67eb9744865901966ddfcaf24d6e9b34e0a7fddb0160a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765346497944427046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-2j4gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a5bd55-8567-4987-8b52-a8afdde6d923,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1fbf6d2-36bb-42b1-a2b5-6806ddcd9449 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	350c6d4bb4039       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              10 minutes ago      Running             myfrontend                0                   7a993bbe44118       sp-pod                                      default
	22ac9989d94c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           10 minutes ago      Exited              mount-munger              0                   f528bb56b9b49       busybox-mount                               default
	338bc49c99044       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   10 minutes ago      Running             mysql                     0                   805e789926c8e       mysql-7d7b65bc95-k8mpc                      default
	01d295ca1de58       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago      Running             coredns                   3                   88c94a4a47516       coredns-7d764666f9-2j4gx                    kube-system
	9290df2a81574       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Running             storage-provisioner       5                   f0b953c47b3c1       storage-provisioner                         kube-system
	15b1d362125b3       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              11 minutes ago      Running             kube-apiserver            0                   30886e95d556e       kube-apiserver-functional-399582            kube-system
	54f34bcb44a6a       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              11 minutes ago      Running             kube-scheduler            3                   2e6503bc41d2f       kube-scheduler-functional-399582            kube-system
	6c5cb4f1b5b0d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              11 minutes ago      Running             kube-controller-manager   3                   a49e5dbc4ecb8       kube-controller-manager-functional-399582   kube-system
	00f6bd2d4e02a       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              11 minutes ago      Running             kube-proxy                3                   b402d7218b1db       kube-proxy-7v74x                            kube-system
	7d45a8de7df26       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              11 minutes ago      Running             etcd                      3                   68e5736951955       etcd-functional-399582                      kube-system
	2b71a11752f10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              12 minutes ago      Exited              storage-provisioner       4                   c3b6fd9071eb2       storage-provisioner                         kube-system
	8c9d6629ac789       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              12 minutes ago      Exited              kube-proxy                2                   a240334f39d4d       kube-proxy-7v74x                            kube-system
	dc895e0e74f8d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              12 minutes ago      Exited              kube-controller-manager   2                   3dce0aebedba9       kube-controller-manager-functional-399582   kube-system
	34dacbe5b37ec       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              12 minutes ago      Exited              kube-scheduler            2                   e653eccbaa11b       kube-scheduler-functional-399582            kube-system
	33548d557bb40       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              12 minutes ago      Exited              etcd                      2                   09ac8ad73ee77       etcd-functional-399582                      kube-system
	f657dd69967dd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              12 minutes ago      Exited              coredns                   2                   28dccbf2db194       coredns-7d764666f9-2j4gx                    kube-system
	
	
	==> coredns [01d295ca1de581b5ba2b941672dc00e5ec3001032e9e93c0623a4da21f6fe608] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53666 - 39596 "HINFO IN 9075107278779861811.4103704832365595779. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.153919535s
	
	
	==> coredns [f657dd69967dd32d4153bdf62bcf28987f2a83e1094875810b3f59ef66ae862d] <==
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38741 - 41185 "HINFO IN 4468328331559939092.7401650893011338442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028651389s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-399582
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-399582
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=functional-399582
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_00_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:00:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-399582
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:13:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:13:06 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:13:06 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:13:06 +0000   Wed, 10 Dec 2025 06:00:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:13:06 +0000   Wed, 10 Dec 2025 06:00:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.120
	  Hostname:    functional-399582
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 4561933152674b80bf8a351184459707
	  System UUID:                45619331-5267-4b80-bf8a-351184459707
	  Boot ID:                    465e07fa-e16f-45c3-9894-63f201e0bd6a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-j9h2z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-9f67c86d4-ks5cc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-k8mpc                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-2j4gx                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-399582                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-399582              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-399582     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-7v74x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-399582              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-nqcvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-psrc4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-399582 event: Registered Node functional-399582 in Controller
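
	For reference, the Allocated resources totals reconcile with the per-pod figures listed above:
	  CPU requests:    600m (mysql) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 1350m  ->  1350m / 2000m ≈ 67%
	  CPU limits:      700m (mysql)                                                                                                                 ->  700m / 2000m = 35%
	  Memory requests: 512Mi (mysql) + 70Mi (coredns) + 100Mi (etcd) = 682Mi                                                                        ->  ≈ 17% of 4001788Ki
	  Memory limits:   700Mi (mysql) + 170Mi (coredns) = 870Mi                                                                                      ->  ≈ 22% of 4001788Ki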
	
	
	==> dmesg <==
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084806] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112029] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.130538] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.000239] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.649001] kauditd_printk_skb: 249 callbacks suppressed
	[Dec10 06:01] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.111759] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.929289] kauditd_printk_skb: 328 callbacks suppressed
	[  +5.527938] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.622908] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.397086] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 06:02] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.110469] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.640447] kauditd_printk_skb: 78 callbacks suppressed
	[  +3.353148] kauditd_printk_skb: 290 callbacks suppressed
	[ +18.397540] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.093117] kauditd_printk_skb: 112 callbacks suppressed
	[  +0.000039] kauditd_printk_skb: 53 callbacks suppressed
	[Dec10 06:03] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.456310] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.041754] kauditd_printk_skb: 48 callbacks suppressed
	[  +2.569447] crun[10809]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.392599] kauditd_printk_skb: 158 callbacks suppressed
	
	
	==> etcd [33548d557bb40b0552c7f8275cf35d645d41237e817e85704f3fcfbda0c10621] <==
	{"level":"info","ts":"2025-12-10T06:01:40.728199Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:01:40.729118Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:01:40.738662Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-10T06:01:40.740136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:01:40.740180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:01:40.745985Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:01:40.751451Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.120:2379"}
	{"level":"info","ts":"2025-12-10T06:02:07.356669Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T06:02:07.359714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-399582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.120:2380"],"advertise-client-urls":["https://192.168.50.120:2379"]}
	{"level":"error","ts":"2025-12-10T06:02:07.375647Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:02:07.452960Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T06:02:07.454643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T06:02:07.454973Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455228Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:02:07.455499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.455116Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4380e8ffa07dad0","current-leader-member-id":"d4380e8ffa07dad0"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455188Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.120:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T06:02:07.455603Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.120:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T06:02:07.455612Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.120:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.455659Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T06:02:07.455677Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T06:02:07.459518Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.120:2380"}
	{"level":"error","ts":"2025-12-10T06:02:07.459591Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.120:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T06:02:07.459614Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.120:2380"}
	{"level":"info","ts":"2025-12-10T06:02:07.459619Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-399582","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.120:2380"],"advertise-client-urls":["https://192.168.50.120:2379"]}
	
	
	==> etcd [7d45a8de7df26224509ea8aff4ab6a397ba984d567fab8eebc405943f747fefc] <==
	{"level":"info","ts":"2025-12-10T06:03:29.732511Z","caller":"traceutil/trace.go:172","msg":"trace[1988418188] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"152.04767ms","start":"2025-12-10T06:03:29.580450Z","end":"2025-12-10T06:03:29.732498Z","steps":["trace[1988418188] 'process raft request'  (duration: 149.665685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.184689Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.068897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.184758Z","caller":"traceutil/trace.go:172","msg":"trace[1516461909] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:871; }","duration":"204.147737ms","start":"2025-12-10T06:03:31.980600Z","end":"2025-12-10T06:03:32.184747Z","steps":["trace[1516461909] 'range keys from in-memory index tree'  (duration: 204.017724ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.184917Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"208.507306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.184985Z","caller":"traceutil/trace.go:172","msg":"trace[1367031098] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:871; }","duration":"208.527207ms","start":"2025-12-10T06:03:31.976401Z","end":"2025-12-10T06:03:32.184929Z","steps":["trace[1367031098] 'range keys from in-memory index tree'  (duration: 208.440076ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:32.185598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.573308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:32.186337Z","caller":"traceutil/trace.go:172","msg":"trace[239063490] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:871; }","duration":"189.40771ms","start":"2025-12-10T06:03:31.996900Z","end":"2025-12-10T06:03:32.186308Z","steps":["trace[239063490] 'range keys from in-memory index tree'  (duration: 187.691756ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:34.083869Z","caller":"traceutil/trace.go:172","msg":"trace[558704773] transaction","detail":"{read_only:false; response_revision:872; number_of_response:1; }","duration":"214.439426ms","start":"2025-12-10T06:03:33.869417Z","end":"2025-12-10T06:03:34.083856Z","steps":["trace[558704773] 'process raft request'  (duration: 214.272956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:34.084791Z","caller":"traceutil/trace.go:172","msg":"trace[1316996738] linearizableReadLoop","detail":"{readStateIndex:958; appliedIndex:958; }","duration":"108.118952ms","start":"2025-12-10T06:03:33.976659Z","end":"2025-12-10T06:03:34.084778Z","steps":["trace[1316996738] 'read index received'  (duration: 106.960887ms)","trace[1316996738] 'applied index is now lower than readState.Index'  (duration: 3.769µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:03:34.084906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.260327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:34.084928Z","caller":"traceutil/trace.go:172","msg":"trace[1302307117] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:872; }","duration":"108.291985ms","start":"2025-12-10T06:03:33.976629Z","end":"2025-12-10T06:03:34.084921Z","steps":["trace[1302307117] 'agreement among raft nodes before linearized reading'  (duration: 108.239758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:34.085077Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.888434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:34.085117Z","caller":"traceutil/trace.go:172","msg":"trace[859508988] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:872; }","duration":"105.928914ms","start":"2025-12-10T06:03:33.979182Z","end":"2025-12-10T06:03:34.085111Z","steps":["trace[859508988] 'agreement among raft nodes before linearized reading'  (duration: 105.87746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:36.641621Z","caller":"traceutil/trace.go:172","msg":"trace[285473757] linearizableReadLoop","detail":"{readStateIndex:959; appliedIndex:959; }","duration":"328.226725ms","start":"2025-12-10T06:03:36.313376Z","end":"2025-12-10T06:03:36.641603Z","steps":["trace[285473757] 'read index received'  (duration: 328.221955ms)","trace[285473757] 'applied index is now lower than readState.Index'  (duration: 4.08µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:03:36.641759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"328.373379ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:03:36.641801Z","caller":"traceutil/trace.go:172","msg":"trace[776500912] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:872; }","duration":"328.428571ms","start":"2025-12-10T06:03:36.313365Z","end":"2025-12-10T06:03:36.641794Z","steps":["trace[776500912] 'agreement among raft nodes before linearized reading'  (duration: 328.342852ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:03:36.642700Z","caller":"traceutil/trace.go:172","msg":"trace[463181425] transaction","detail":"{read_only:false; response_revision:873; number_of_response:1; }","duration":"530.748137ms","start":"2025-12-10T06:03:36.111941Z","end":"2025-12-10T06:03:36.642689Z","steps":["trace[463181425] 'process raft request'  (duration: 530.387731ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:03:36.643189Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-10T06:03:36.111920Z","time spent":"530.806841ms","remote":"127.0.0.1:46456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:872 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-10T06:04:05.175188Z","caller":"traceutil/trace.go:172","msg":"trace[2015403361] linearizableReadLoop","detail":"{readStateIndex:1105; appliedIndex:1105; }","duration":"196.979812ms","start":"2025-12-10T06:04:04.978181Z","end":"2025-12-10T06:04:05.175160Z","steps":["trace[2015403361] 'read index received'  (duration: 196.975487ms)","trace[2015403361] 'applied index is now lower than readState.Index'  (duration: 3.69µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:04:05.175359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.150251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-10T06:04:05.175385Z","caller":"traceutil/trace.go:172","msg":"trace[1256793997] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1010; }","duration":"197.202581ms","start":"2025-12-10T06:04:04.978177Z","end":"2025-12-10T06:04:05.175379Z","steps":["trace[1256793997] 'agreement among raft nodes before linearized reading'  (duration: 197.076503ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:04:05.175511Z","caller":"traceutil/trace.go:172","msg":"trace[2085411194] transaction","detail":"{read_only:false; response_revision:1011; number_of_response:1; }","duration":"205.526406ms","start":"2025-12-10T06:04:04.969973Z","end":"2025-12-10T06:04:05.175499Z","steps":["trace[2085411194] 'process raft request'  (duration: 205.271182ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:12:25.407374Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1216}
	{"level":"info","ts":"2025-12-10T06:12:25.433601Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1216,"took":"25.711813ms","hash":4283206531,"current-db-size-bytes":3788800,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1908736,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-10T06:12:25.433709Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4283206531,"revision":1216,"compact-revision":-1}
	
	
	==> kernel <==
	 06:13:55 up 14 min,  0 users,  load average: 0.95, 0.46, 0.33
	Linux functional-399582 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [15b1d362125b3092fb66e0b16178c53c7c2f11abf3edb2e509aa2589e946b1be] <==
	I1210 06:02:26.889348       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:02:26.929367       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:02:27.390736       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:02:27.697098       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:02:28.662765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:02:28.724196       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:02:28.767987       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:02:28.776874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:02:29.854631       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:02:30.052589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:02:47.295902       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.111.132"}
	I1210 06:02:51.836462       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:02:51.963935       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.81.189"}
	I1210 06:02:52.612177       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.154.107"}
	E1210 06:03:46.020463       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:58956: use of closed network connection
	E1210 06:03:47.645067       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38392: use of closed network connection
	E1210 06:03:48.800925       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38422: use of closed network connection
	E1210 06:03:50.750885       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38454: use of closed network connection
	I1210 06:03:52.779925       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:03:53.114452       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.90.246"}
	I1210 06:03:53.153470       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.243.139"}
	E1210 06:03:53.500768       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:38478: use of closed network connection
	I1210 06:03:53.715584       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.241.71"}
	E1210 06:03:58.036437       1 conn.go:339] Error on socket receive: read tcp 192.168.50.120:8441->192.168.50.1:41526: use of closed network connection
	I1210 06:12:26.823660       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6c5cb4f1b5b0dbd284b012c6ca448e3be248dbbd5b23099728012529c8ed8d01] <==
	I1210 06:02:29.813987       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.814096       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.814122       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.819585       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.831376       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.859735       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.863993       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865202       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865411       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.865440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866067       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866235       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.866489       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.873095       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.873108       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:02:29.873112       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:02:29.878549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:29.908705       1 shared_informer.go:377] "Caches are synced"
	E1210 06:03:52.901570       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.918491       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.930744       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.945711       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.953566       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.960160       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1210 06:03:52.960307       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dc895e0e74f8d61d1ab39b2b340b40b857cb1e26f1f2148da431abb016debcd9] <==
	I1210 06:01:45.546440       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.548544       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1210 06:01:45.549322       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:45.549365       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546459       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546466       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546472       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546483       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546685       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.550963       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:01:45.551409       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-399582"
	I1210 06:01:45.551472       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:01:45.546953       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.546961       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551593       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551646       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.551669       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.548023       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.541885       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.602522       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647481       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647537       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:45.647543       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:01:45.647547       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:01:45.957439       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [00f6bd2d4e02a8641c13f3c829d7ea7b8a267f5ef2d586d0ca0145fe73d582fc] <==
	I1210 06:02:23.233045       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:02:27.135719       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:27.135761       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.120"]
	E1210 06:02:27.135872       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:02:27.173126       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:02:27.173199       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:02:27.173221       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:02:27.184408       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:02:27.184827       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:02:27.184871       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:02:27.189828       1 config.go:200] "Starting service config controller"
	I1210 06:02:27.189881       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:02:27.189909       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:02:27.189924       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:02:27.189945       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:02:27.189958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:02:27.190769       1 config.go:309] "Starting node config controller"
	I1210 06:02:27.190811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:02:27.190827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:02:27.290017       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:02:27.290060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:02:27.290095       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8c9d6629ac789255c9295ae5b0442910062b0d5332b2af1e6f526070b4e657dd] <==
	I1210 06:01:43.130899       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:43.231750       1 shared_informer.go:377] "Caches are synced"
	I1210 06:01:43.231846       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.120"]
	E1210 06:01:43.231918       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:01:43.284185       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 06:01:43.284368       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 06:01:43.284394       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:01:43.294955       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:01:43.296355       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1210 06:01:43.296416       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:01:43.303620       1 config.go:200] "Starting service config controller"
	I1210 06:01:43.303677       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:01:43.303694       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:01:43.303698       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:01:43.303710       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:01:43.303723       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:01:43.306467       1 config.go:309] "Starting node config controller"
	I1210 06:01:43.306788       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:01:43.306819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:01:43.404750       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:01:43.404809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:01:43.405192       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [34dacbe5b37ec930f2eceb458267c8984cfb952746601de7f1eca23f7abff458] <==
	I1210 06:01:40.977506       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:01:42.287034       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:01:42.287140       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:01:42.287151       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:01:42.287157       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:01:42.400699       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:01:42.400795       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:01:42.414606       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:01:42.415148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:01:42.416387       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:01:42.415162       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:01:42.517587       1 shared_informer.go:377] "Caches are synced"
	I1210 06:02:07.379186       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 06:02:07.381172       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 06:02:07.384358       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 06:02:07.384534       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [54f34bcb44a6a4e1791c2070a389a50fa785c25b3922cd9c5ce900f335b98e99] <==
	I1210 06:02:25.191251       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:02:26.759517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:02:26.760339       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:02:26.760394       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:02:26.760412       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:02:26.813752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1210 06:02:26.813859       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:02:26.827377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:02:26.827471       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:02:26.827509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:02:26.827667       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:02:26.928312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.463601    6885 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod820864d487c46dbb13dc25752422a6c0/crio-e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98: Error finding container e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98: Status 404 returned error can't find the container with id e653eccbaa11b141c0bdd046377d2b552a89babd7e9b32c029cbfb54a6533c98
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.464009    6885 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod19188f1a520cc8cda23c4f36263cc30e/crio-3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a: Error finding container 3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a: Status 404 returned error can't find the container with id 3dce0aebedba977e373e41879db1b1582294d8eb5ea34a9062628691a5714a7a
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.464384    6885 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod201b80fd997ce091bd5c05d64ceda09d/crio-09ac8ad73ee77e6723a39c67219b93bb19b65cd2e7ab3ef335fa9a1711876d37: Error finding container 09ac8ad73ee77e6723a39c67219b93bb19b65cd2e7ab3ef335fa9a1711876d37: Status 404 returned error can't find the container with id 09ac8ad73ee77e6723a39c67219b93bb19b65cd2e7ab3ef335fa9a1711876d37
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.464720    6885 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod4707d892-af08-4059-a693-2e05840d221e/crio-a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0: Error finding container a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0: Status 404 returned error can't find the container with id a240334f39d4d729afc0cc953c9778703ebe062f2151aafb18393ed85473f0d0
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.796205    6885 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765347204795808367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:24 functional-399582 kubelet[6885]: E1210 06:13:24.796250    6885 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765347204795808367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:25 functional-399582 kubelet[6885]: E1210 06:13:25.372433    6885 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-psrc4" containerName="kubernetes-dashboard"
	Dec 10 06:13:28 functional-399582 kubelet[6885]: E1210 06:13:28.373432    6885 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt" containerName="dashboard-metrics-scraper"
	Dec 10 06:13:28 functional-399582 kubelet[6885]: E1210 06:13:28.376390    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt" podUID="2b5b8546-f5df-45fc-aa4b-23e2669e7550"
	Dec 10 06:13:29 functional-399582 kubelet[6885]: E1210 06:13:29.373612    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-j9h2z" podUID="d571e289-1b93-4436-8c66-1fdca8cd4ac6"
	Dec 10 06:13:34 functional-399582 kubelet[6885]: E1210 06:13:34.376084    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-ks5cc" podUID="b0bd85fb-560c-49b9-98cf-8d7c2db9b6bf"
	Dec 10 06:13:34 functional-399582 kubelet[6885]: E1210 06:13:34.798588    6885 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765347214798062084  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:34 functional-399582 kubelet[6885]: E1210 06:13:34.798629    6885 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765347214798062084  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:40 functional-399582 kubelet[6885]: E1210 06:13:40.373153    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-j9h2z" podUID="d571e289-1b93-4436-8c66-1fdca8cd4ac6"
	Dec 10 06:13:42 functional-399582 kubelet[6885]: E1210 06:13:42.372776    6885 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt" containerName="dashboard-metrics-scraper"
	Dec 10 06:13:42 functional-399582 kubelet[6885]: E1210 06:13:42.374242    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-nqcvt" podUID="2b5b8546-f5df-45fc-aa4b-23e2669e7550"
	Dec 10 06:13:44 functional-399582 kubelet[6885]: E1210 06:13:44.801493    6885 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765347224801081722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:44 functional-399582 kubelet[6885]: E1210 06:13:44.801516    6885 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765347224801081722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:47 functional-399582 kubelet[6885]: E1210 06:13:47.373351    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-ks5cc" podUID="b0bd85fb-560c-49b9-98cf-8d7c2db9b6bf"
	Dec 10 06:13:50 functional-399582 kubelet[6885]: E1210 06:13:50.372670    6885 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2j4gx" containerName="coredns"
	Dec 10 06:13:52 functional-399582 kubelet[6885]: E1210 06:13:52.373865    6885 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-399582" containerName="etcd"
	Dec 10 06:13:52 functional-399582 kubelet[6885]: E1210 06:13:52.374381    6885 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-399582" containerName="kube-scheduler"
	Dec 10 06:13:53 functional-399582 kubelet[6885]: E1210 06:13:53.373710    6885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-j9h2z" podUID="d571e289-1b93-4436-8c66-1fdca8cd4ac6"
	Dec 10 06:13:54 functional-399582 kubelet[6885]: E1210 06:13:54.804016    6885 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765347234803358605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	Dec 10 06:13:54 functional-399582 kubelet[6885]: E1210 06:13:54.804039    6885 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765347234803358605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242143}  inodes_used:{value:113}}"
	
	
	==> storage-provisioner [2b71a11752f1091e04da2f0bf7705df7d486a2cebe5a86ee1cb28beeb7adbd32] <==
	I1210 06:01:53.668408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:01:53.680228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:01:53.680412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:01:53.683844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:01:57.139855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:02:01.400220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:02:04.999500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9290df2a81574e8780c2eca1758d81da038c03db07a7d7f65c207c7ef65e6f62] <==
	W1210 06:13:30.363019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:32.366975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:32.373241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:34.380404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:34.392027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:36.395049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:36.400794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:38.404216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:38.413840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:40.417042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:40.426812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:42.431896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:42.442003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:44.445991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:44.452419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:46.457058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:46.474353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:48.477694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:48.482793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:50.485854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:50.492517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:52.496563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:52.501523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:54.509372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:13:54.518362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399582 -n functional-399582
helpers_test.go:270: (dbg) Run:  kubectl --context functional-399582 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4: exit status 1 (96.859797ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:03:02 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://22ac9989d94c01bc39b2f41658799119dd112b19f141f601945be966c3a02809
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 10 Dec 2025 06:03:45 +0000
	      Finished:     Wed, 10 Dec 2025 06:03:45 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rl6qv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rl6qv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-399582
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.464s (41.619s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-j9h2z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:02:51 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rnbfz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rnbfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-j9h2z to functional-399582
	  Warning  Failed     4m34s (x4 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m5s (x5 over 11m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     93s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     93s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     16s (x17 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x18 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-ks5cc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-399582/192.168.50.120
	Start Time:       Wed, 10 Dec 2025 06:03:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8bgp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k8bgp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-ks5cc to functional-399582
	  Normal   Pulling    2m23s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     33s (x4 over 8m5s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     33s (x4 over 8m5s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x6 over 8m4s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x6 over 8m4s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-nqcvt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-psrc4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-399582 describe pod busybox-mount hello-node-5758569b79-j9h2z hello-node-connect-9f67c86d4-ks5cc dashboard-metrics-scraper-5565989548-nqcvt kubernetes-dashboard-b84665fb8-psrc4: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-399582 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-399582 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-j9h2z" [d571e289-1b93-4436-8c66-1fdca8cd4ac6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-399582 -n functional-399582
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-10 06:12:52.235371224 +0000 UTC m=+2656.286823359
functional_test.go:1460: (dbg) Run:  kubectl --context functional-399582 describe po hello-node-5758569b79-j9h2z -n default
functional_test.go:1460: (dbg) kubectl --context functional-399582 describe po hello-node-5758569b79-j9h2z -n default:
Name:             hello-node-5758569b79-j9h2z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-399582/192.168.50.120
Start Time:       Wed, 10 Dec 2025 06:02:51 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rnbfz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rnbfz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-j9h2z to functional-399582
Warning  Failed     3m30s (x4 over 9m30s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m1s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     29s (x5 over 9m30s)    kubelet            Error: ErrImagePull
Warning  Failed     29s                    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3s (x13 over 9m30s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     3s (x13 over 9m30s)    kubelet            Error: ImagePullBackOff
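The events above pin the failure on Docker Hub's unauthenticated pull rate limit rather than on the pod spec itself. A minimal Go sketch of reading the waiting reason straight from the pod status via kubectl, which is how a harness could detect this state programmatically (the context, namespace, and pod name are taken from the log above; the helper itself is hypothetical and not part of minikube's test code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // waitingReason shells out to kubectl and returns the waiting reason of the
    // first container in the pod, e.g. "ErrImagePull" or "ImagePullBackOff".
    func waitingReason(kubectx, namespace, pod string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubectx, "-n", namespace,
            "get", "pod", pod,
            "-o", "jsonpath={.status.containerStatuses[0].state.waiting.reason}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        reason, err := waitingReason("functional-399582", "default", "hello-node-5758569b79-j9h2z")
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Println("waiting reason:", reason) // for the run above this would print ImagePullBackOff
    }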
functional_test.go:1460: (dbg) Run:  kubectl --context functional-399582 logs hello-node-5758569b79-j9h2z -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-399582 logs hello-node-5758569b79-j9h2z -n default: exit status 1 (73.782495ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-j9h2z" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-399582 logs hello-node-5758569b79-j9h2z -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.62s)
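The deployment itself is sound; the 10m0s deadline is spent entirely in ImagePullBackOff because the node pulls kicbase/echo-server anonymously from Docker Hub. One way a run like this could sidestep the toomanyrequests limit is to hand the image to the cluster from the host before deploying, e.g. with `minikube image load`. A hedged Go sketch of that preload step (the profile name comes from the log; the preload is an assumption about how the flow could be hardened, not what functional_test.go currently does):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // preloadImage copies a host-side image into the minikube node's container
    // runtime so the kubelet never has to pull it from Docker Hub.
    func preloadImage(profile, image string) error {
        // Pull once on the host (from a local cache or an authenticated session) ...
        if out, err := exec.Command("docker", "pull", image).CombinedOutput(); err != nil {
            return fmt.Errorf("docker pull: %v: %s", err, out)
        }
        // ... then load it into the cluster, bypassing in-cluster pulls entirely.
        if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "image", "load", image).CombinedOutput(); err != nil {
            return fmt.Errorf("minikube image load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := preloadImage("functional-399582", "kicbase/echo-server:latest"); err != nil {
            fmt.Println(err)
        }
    }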

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 service --namespace=default --https --url hello-node: exit status 115 (253.721103ms)

                                                
                                                
-- stdout --
	https://192.168.50.120:31214
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-399582 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 service hello-node --url --format={{.IP}}: exit status 115 (256.399415ms)

                                                
                                                
-- stdout --
	192.168.50.120
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-399582 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 service hello-node --url: exit status 115 (263.487467ms)

                                                
                                                
-- stdout --
	http://192.168.50.120:31214
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-399582 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.50.120:31214
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.26s)
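The HTTPS, Format, and URL subtests above all exit with status 115 (SVC_UNREACHABLE) for the same underlying reason: the NodePort URL resolves, but the hello-node deployment has no running pod, so the service has no ready endpoints. A minimal Go sketch of a pre-check that gates the URL lookup on the service actually having endpoints (a hypothetical helper, not part of the test suite):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // hasEndpoints reports whether the service currently has at least one ready address.
    func hasEndpoints(kubectx, namespace, svc string) bool {
        out, err := exec.Command("kubectl", "--context", kubectx, "-n", namespace,
            "get", "endpoints", svc,
            "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        for i := 0; i < 30; i++ {
            if hasEndpoints("functional-399582", "default", "hello-node") {
                fmt.Println("hello-node has a running backend; safe to query its URL")
                return
            }
            time.Sleep(10 * time.Second)
        }
        fmt.Println("no ready endpoints; minikube service would exit with SVC_UNREACHABLE")
    }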

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (146.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-179913 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-179913 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m21.477924994s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-179913] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-179913" primary control-plane node in "pause-179913" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-179913" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
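The captured stdout above ends without ever printing "The running cluster does not require reconfiguration", which is exactly what pause_test.go:100 checks for. A minimal sketch of that kind of substring assertion over the second start's output (package and helper names are illustrative, not the suite's actual code):

    package pause_test

    import (
        "strings"
        "testing"
    )

    // checkNoReconfiguration asserts that the second `minikube start` reported
    // that the already-running cluster needed no reconfiguration.
    func checkNoReconfiguration(t *testing.T, stdout string) {
        t.Helper()
        const want = "The running cluster does not require reconfiguration"
        if !strings.Contains(stdout, want) {
            t.Errorf("expected the second start log output to include %q but got:\n%s", want, stdout)
        }
    }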
** stderr ** 
	I1210 07:03:38.437611  292755 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:03:38.437949  292755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:03:38.437962  292755 out.go:374] Setting ErrFile to fd 2...
	I1210 07:03:38.437968  292755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:03:38.438310  292755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 07:03:38.438811  292755 out.go:368] Setting JSON to false
	I1210 07:03:38.439830  292755 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31565,"bootTime":1765318653,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 07:03:38.439925  292755 start.go:143] virtualization: kvm guest
	I1210 07:03:38.442048  292755 out.go:179] * [pause-179913] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 07:03:38.443773  292755 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:03:38.443840  292755 notify.go:221] Checking for updates...
	I1210 07:03:38.447079  292755 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:03:38.448568  292755 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:03:38.452554  292755 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:03:38.453992  292755 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 07:03:38.455370  292755 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:03:38.457319  292755 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:03:38.457865  292755 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:03:38.501282  292755 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 07:03:38.502512  292755 start.go:309] selected driver: kvm2
	I1210 07:03:38.502531  292755 start.go:927] validating driver "kvm2" against &{Name:pause-179913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.3 ClusterName:pause-179913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:03:38.502722  292755 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:03:38.504112  292755 cni.go:84] Creating CNI manager for ""
	I1210 07:03:38.504204  292755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:03:38.504289  292755 start.go:353] cluster config:
	{Name:pause-179913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-179913 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:03:38.504476  292755 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:38.507303  292755 out.go:179] * Starting "pause-179913" primary control-plane node in "pause-179913" cluster
	I1210 07:03:38.508436  292755 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 07:03:38.531915  292755 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1210 07:03:38.558568  292755 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1210 07:03:38.558797  292755 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/config.json ...
	I1210 07:03:38.558969  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:03:38.559150  292755 start.go:360] acquireMachinesLock for pause-179913: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 07:03:38.735815  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:03:38.907297  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:03:39.081283  292755 cache.go:107] acquiring lock: {Name:mk4f601fcccaa8421d9a471640a96feb5df57ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081294  292755 cache.go:107] acquiring lock: {Name:mka12e8a345a6dc24c0da40f31d69a169b73fc8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081355  292755 cache.go:107] acquiring lock: {Name:mk72740fe8a4d4eb6e3ad18d28ff308f87f86eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081283  292755 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081336  292755 cache.go:107] acquiring lock: {Name:mkca46313d0e39171add494fd1f96b98422fb511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081421  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:03:39.081427  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:03:39.081432  292755 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3" took 185.18µs
	I1210 07:03:39.081437  292755 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 191.716µs
	I1210 07:03:39.081441  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:03:39.081444  292755 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:03:39.081445  292755 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:03:39.081418  292755 cache.go:107] acquiring lock: {Name:mkc558d20fc07b350030510216ebcf1d2df4b57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081455  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:03:39.081285  292755 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081469  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:03:39.081464  292755 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3" took 182.65µs
	I1210 07:03:39.081478  292755 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:03:39.081477  292755 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 123.04µs
	I1210 07:03:39.081426  292755 cache.go:107] acquiring lock: {Name:mkc561f0208895e5efe372932a5a00136ddcb2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:03:39.081502  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:03:39.081509  292755 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 241.505µs
	I1210 07:03:39.081518  292755 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:03:39.081529  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:03:39.081542  292755 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3" took 177.462µs
	I1210 07:03:39.081552  292755 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:03:39.081450  292755 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 116.628µs
	I1210 07:03:39.081560  292755 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:03:39.081485  292755 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:03:39.081644  292755 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:03:39.081668  292755 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3" took 247.875µs
	I1210 07:03:39.081682  292755 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:03:39.081693  292755 cache.go:87] Successfully saved all images to host disk.
	I1210 07:04:08.747918  292755 start.go:364] duration metric: took 30.188683303s to acquireMachinesLock for "pause-179913"
	I1210 07:04:08.747975  292755 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:04:08.747993  292755 fix.go:54] fixHost starting: 
	I1210 07:04:08.750668  292755 fix.go:112] recreateIfNeeded on pause-179913: state=Running err=<nil>
	W1210 07:04:08.750750  292755 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:04:08.754077  292755 out.go:252] * Updating the running kvm2 "pause-179913" VM ...
	I1210 07:04:08.754116  292755 machine.go:94] provisionDockerMachine start ...
	I1210 07:04:08.758367  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:08.759012  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:08.759060  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:08.759312  292755 main.go:143] libmachine: Using SSH client type: native
	I1210 07:04:08.759628  292755 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.172 22 <nil> <nil>}
	I1210 07:04:08.759647  292755 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:04:08.892470  292755 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-179913
	
	I1210 07:04:08.892507  292755 buildroot.go:166] provisioning hostname "pause-179913"
	I1210 07:04:08.895636  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:08.896112  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:08.896147  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:08.896396  292755 main.go:143] libmachine: Using SSH client type: native
	I1210 07:04:08.896710  292755 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.172 22 <nil> <nil>}
	I1210 07:04:08.896730  292755 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-179913 && echo "pause-179913" | sudo tee /etc/hostname
	I1210 07:04:09.048712  292755 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-179913
	
	I1210 07:04:09.052477  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.053026  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:09.053075  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.053312  292755 main.go:143] libmachine: Using SSH client type: native
	I1210 07:04:09.053650  292755 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.172 22 <nil> <nil>}
	I1210 07:04:09.053683  292755 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-179913' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-179913/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-179913' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:04:09.181753  292755 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:04:09.181793  292755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 07:04:09.181860  292755 buildroot.go:174] setting up certificates
	I1210 07:04:09.181888  292755 provision.go:84] configureAuth start
	I1210 07:04:09.185683  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.186224  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:09.186269  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.189239  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.189781  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:09.189821  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.189994  292755 provision.go:143] copyHostCerts
	I1210 07:04:09.190092  292755 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem, removing ...
	I1210 07:04:09.190113  292755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem
	I1210 07:04:09.190185  292755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 07:04:09.190326  292755 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem, removing ...
	I1210 07:04:09.190342  292755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem
	I1210 07:04:09.190379  292755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 07:04:09.190481  292755 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem, removing ...
	I1210 07:04:09.190491  292755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem
	I1210 07:04:09.190521  292755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 07:04:09.190587  292755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.pause-179913 san=[127.0.0.1 192.168.83.172 localhost minikube pause-179913]
	I1210 07:04:09.246056  292755 provision.go:177] copyRemoteCerts
	I1210 07:04:09.246128  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:04:09.249702  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.250259  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:09.250288  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.250587  292755 sshutil.go:53] new ssh client: &{IP:192.168.83.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/pause-179913/id_rsa Username:docker}
	I1210 07:04:09.349527  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:04:09.391156  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:04:09.432968  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:04:09.470821  292755 provision.go:87] duration metric: took 288.893195ms to configureAuth
	I1210 07:04:09.470902  292755 buildroot.go:189] setting minikube options for container-runtime
	I1210 07:04:09.471192  292755 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:04:09.474510  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.474864  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:09.474901  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:09.475078  292755 main.go:143] libmachine: Using SSH client type: native
	I1210 07:04:09.475295  292755 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.172 22 <nil> <nil>}
	I1210 07:04:09.475308  292755 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:04:15.346572  292755 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:04:15.346604  292755 machine.go:97] duration metric: took 6.592478748s to provisionDockerMachine
	I1210 07:04:15.346621  292755 start.go:293] postStartSetup for "pause-179913" (driver="kvm2")
	I1210 07:04:15.346635  292755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:04:15.346722  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:04:15.350119  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.350805  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:15.350842  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.351146  292755 sshutil.go:53] new ssh client: &{IP:192.168.83.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/pause-179913/id_rsa Username:docker}
	I1210 07:04:15.451055  292755 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:04:15.458139  292755 info.go:137] Remote host: Buildroot 2025.02
	I1210 07:04:15.458179  292755 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 07:04:15.458271  292755 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 07:04:15.458401  292755 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem -> 2473662.pem in /etc/ssl/certs
	I1210 07:04:15.458533  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:04:15.475973  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:04:15.518419  292755 start.go:296] duration metric: took 171.777854ms for postStartSetup
	I1210 07:04:15.518470  292755 fix.go:56] duration metric: took 6.77047918s for fixHost
	I1210 07:04:15.521749  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.522285  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:15.522316  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.522509  292755 main.go:143] libmachine: Using SSH client type: native
	I1210 07:04:15.522809  292755 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.172 22 <nil> <nil>}
	I1210 07:04:15.522825  292755 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 07:04:15.647036  292755 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765350255.642474238
	
	I1210 07:04:15.647064  292755 fix.go:216] guest clock: 1765350255.642474238
	I1210 07:04:15.647073  292755 fix.go:229] Guest: 2025-12-10 07:04:15.642474238 +0000 UTC Remote: 2025-12-10 07:04:15.518474664 +0000 UTC m=+37.140638679 (delta=123.999574ms)
	I1210 07:04:15.647096  292755 fix.go:200] guest clock delta is within tolerance: 123.999574ms
	I1210 07:04:15.647103  292755 start.go:83] releasing machines lock for "pause-179913", held for 6.899152033s
	I1210 07:04:15.650657  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.651203  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:15.651236  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.651925  292755 ssh_runner.go:195] Run: cat /version.json
	I1210 07:04:15.652025  292755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:04:15.656938  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.656938  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.657468  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:15.657498  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.657565  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:15.657598  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:15.657859  292755 sshutil.go:53] new ssh client: &{IP:192.168.83.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/pause-179913/id_rsa Username:docker}
	I1210 07:04:15.657922  292755 sshutil.go:53] new ssh client: &{IP:192.168.83.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/pause-179913/id_rsa Username:docker}
	I1210 07:04:15.779298  292755 ssh_runner.go:195] Run: systemctl --version
	I1210 07:04:15.787939  292755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:04:15.953423  292755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:04:15.965895  292755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:04:15.966013  292755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:04:15.984393  292755 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:04:15.984427  292755 start.go:496] detecting cgroup driver to use...
	I1210 07:04:15.984519  292755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:04:16.019619  292755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:04:16.047363  292755 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:04:16.047506  292755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:04:16.080454  292755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:04:16.105220  292755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:04:16.367413  292755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:04:16.646517  292755 docker.go:234] disabling docker service ...
	I1210 07:04:16.646592  292755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:04:16.691217  292755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:04:16.717902  292755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:04:17.028528  292755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:04:17.430446  292755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:04:17.497847  292755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:04:17.589181  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:04:17.780583  292755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:04:17.780864  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:17.823867  292755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:04:17.823987  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:17.857010  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:17.906442  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:17.952442  292755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:04:17.992613  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:18.031912  292755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:18.067828  292755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:04:18.113185  292755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:04:18.152772  292755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:04:18.201044  292755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:04:18.738969  292755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:04:19.466842  292755 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:04:19.467060  292755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:04:19.477951  292755 start.go:564] Will wait 60s for crictl version
	I1210 07:04:19.478041  292755 ssh_runner.go:195] Run: which crictl
	I1210 07:04:19.485440  292755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 07:04:19.540481  292755 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 07:04:19.540605  292755 ssh_runner.go:195] Run: crio --version
	I1210 07:04:19.579447  292755 ssh_runner.go:195] Run: crio --version
	I1210 07:04:19.623774  292755 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1210 07:04:19.628452  292755 main.go:143] libmachine: domain pause-179913 has defined MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:19.628981  292755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:3e:b2", ip: ""} in network mk-pause-179913: {Iface:virbr5 ExpiryTime:2025-12-10 08:02:12 +0000 UTC Type:0 Mac:52:54:00:d0:3e:b2 Iaid: IPaddr:192.168.83.172 Prefix:24 Hostname:pause-179913 Clientid:01:52:54:00:d0:3e:b2}
	I1210 07:04:19.629036  292755 main.go:143] libmachine: domain pause-179913 has defined IP address 192.168.83.172 and MAC address 52:54:00:d0:3e:b2 in network mk-pause-179913
	I1210 07:04:19.629342  292755 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1210 07:04:19.636947  292755 kubeadm.go:884] updating cluster {Name:pause-179913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:pause-179913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:04:19.637220  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:04:19.801495  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:04:19.985703  292755 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:04:20.179557  292755 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:04:20.179644  292755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:04:20.472798  292755 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:04:20.472834  292755 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:04:20.472844  292755 kubeadm.go:935] updating node { 192.168.83.172 8443 v1.34.3 crio true true} ...
	I1210 07:04:20.473014  292755 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-179913 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-179913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:04:20.473129  292755 ssh_runner.go:195] Run: crio config
	I1210 07:04:20.617053  292755 cni.go:84] Creating CNI manager for ""
	I1210 07:04:20.617089  292755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:04:20.617115  292755 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:04:20.617155  292755 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.172 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-179913 NodeName:pause-179913 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:04:20.617346  292755 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-179913"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:04:20.617460  292755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:04:20.651385  292755 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:04:20.651476  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:04:20.693551  292755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1210 07:04:20.860271  292755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:04:20.950850  292755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
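
The kubelet drop-in, systemd unit, and kubeadm.yaml above are rendered in memory from the kubeadm options struct (kubeadm.go:190) and then copied onto the node as shown by the "scp memory" lines. A minimal, hypothetical sketch of that rendering step using Go's text/template follows; the struct fields, template text, and values are taken from or modeled on the log, not lifted from minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

// Hypothetical subset of the kubeadm options shown in the log above.
type kubeadmOptions struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

// Illustrative template covering only a fragment of the generated config.
const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOptions{
		AdvertiseAddress:  "192.168.83.172",
		APIServerPort:     8443,
		NodeName:          "pause-179913",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.34.3",
	}
	// Render to stdout; in the log the equivalent bytes are written to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
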
	I1210 07:04:21.043779  292755 ssh_runner.go:195] Run: grep 192.168.83.172	control-plane.minikube.internal$ /etc/hosts
	I1210 07:04:21.057148  292755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:04:21.563182  292755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:04:21.654956  292755 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913 for IP: 192.168.83.172
	I1210 07:04:21.654982  292755 certs.go:195] generating shared ca certs ...
	I1210 07:04:21.655002  292755 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:04:21.655238  292755 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 07:04:21.655318  292755 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 07:04:21.655330  292755 certs.go:257] generating profile certs ...
	I1210 07:04:21.655443  292755 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/client.key
	I1210 07:04:21.655526  292755 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/apiserver.key.c09222c5
	I1210 07:04:21.655602  292755 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/proxy-client.key
	I1210 07:04:21.655780  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem (1338 bytes)
	W1210 07:04:21.655833  292755 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366_empty.pem, impossibly tiny 0 bytes
	I1210 07:04:21.655845  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:04:21.655899  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:04:21.655936  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:04:21.655979  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 07:04:21.656045  292755 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:04:21.657125  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:04:21.778617  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:04:21.881047  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:04:21.945243  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:04:22.007901  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:04:22.053842  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:04:22.100585  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:04:22.150655  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:04:22.193463  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem --> /usr/share/ca-certificates/247366.pem (1338 bytes)
	I1210 07:04:22.241284  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /usr/share/ca-certificates/2473662.pem (1708 bytes)
	I1210 07:04:22.316495  292755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:04:22.361841  292755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:04:22.391132  292755 ssh_runner.go:195] Run: openssl version
	I1210 07:04:22.399773  292755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:04:22.416599  292755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:04:22.434379  292755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:04:22.441655  292755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:04:22.441735  292755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:04:22.450961  292755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:04:22.470256  292755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/247366.pem
	I1210 07:04:22.489191  292755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/247366.pem /etc/ssl/certs/247366.pem
	I1210 07:04:22.508148  292755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247366.pem
	I1210 07:04:22.516586  292755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:59 /usr/share/ca-certificates/247366.pem
	I1210 07:04:22.516672  292755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247366.pem
	I1210 07:04:22.528095  292755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:04:22.546833  292755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2473662.pem
	I1210 07:04:22.565972  292755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2473662.pem /etc/ssl/certs/2473662.pem
	I1210 07:04:22.587012  292755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2473662.pem
	I1210 07:04:22.594431  292755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:59 /usr/share/ca-certificates/2473662.pem
	I1210 07:04:22.594517  292755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2473662.pem
	I1210 07:04:22.605013  292755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:04:22.624743  292755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:04:22.633317  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:04:22.645280  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:04:22.657399  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:04:22.668942  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:04:22.677382  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:04:22.686955  292755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
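
The `openssl x509 -noout -in <cert> -checkend 86400` calls above verify that each control-plane certificate remains valid for at least the next 24 hours before the existing certs are reused. A minimal sketch of the same check in Go with crypto/x509 (the path in main is illustrative; the log checks the apiserver, etcd, and front-proxy certs in turn):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+d, i.e. the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
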
	I1210 07:04:22.696232  292755 kubeadm.go:401] StartCluster: {Name:pause-179913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-179913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:04:22.696425  292755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:04:22.696518  292755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:04:22.745814  292755 cri.go:89] found id: "f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71"
	I1210 07:04:22.745847  292755 cri.go:89] found id: "7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08"
	I1210 07:04:22.745855  292755 cri.go:89] found id: "8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86"
	I1210 07:04:22.745861  292755 cri.go:89] found id: "874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76"
	I1210 07:04:22.745885  292755 cri.go:89] found id: "2b5b8e8231d3f0aa4bc5cd8b1c4043229635508a8120cf502ac221f9dff9e9cc"
	I1210 07:04:22.745893  292755 cri.go:89] found id: "1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf"
	I1210 07:04:22.745897  292755 cri.go:89] found id: "9a505c393ad4bd1f6d08058dd609aa430b798d0d1740809e6e0d02d90276cf72"
	I1210 07:04:22.745901  292755 cri.go:89] found id: "a2fc7915c20dfdb8bb4611274eab27e0d17396781df0436e02557ef3ca06ecde"
	I1210 07:04:22.745906  292755 cri.go:89] found id: "58cdb2885469a944f7d131b9f54a16b652759762831b1bbb4cc3ac983e401c7c"
	I1210 07:04:22.745920  292755 cri.go:89] found id: "5f4f6daacd15e949fa713a4a2a38733fee1647b73de0c30075050127f3d4c00c"
	I1210 07:04:22.745925  292755 cri.go:89] found id: "c74742d28710cbe2ed4b0c916372828a6807c251c55a619f9db490dbc044d098"
	I1210 07:04:22.745929  292755 cri.go:89] found id: ""
	I1210 07:04:22.745993  292755 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
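
The cri.go lines near the end of the stderr block enumerate the kube-system containers by shelling out to crictl with a pod-namespace label filter and collecting the returned IDs. A rough sketch of that pattern (not minikube's actual helper; it uses the same crictl flags shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the container IDs that crictl reports
// for the kube-system namespace, mirroring the command in the log:
//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
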
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-179913 -n pause-179913
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-179913 logs -n 25
E1210 07:06:00.545034  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-179913 logs -n 25: (1.976144423s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-714139 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo containerd config dump                                                                                                                                                                                                │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo crio config                                                                                                                                                                                                           │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ delete  │ -p cilium-714139                                                                                                                                                                                                                            │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │ 10 Dec 25 07:02 UTC │
	│ start   │ -p guest-539425 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-539425              │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p cert-expiration-198346 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-198346    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:04 UTC │
	│ delete  │ -p force-systemd-env-909953                                                                                                                                                                                                                 │ force-systemd-env-909953  │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p force-systemd-flag-302211 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:04 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-511706 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-511706    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │                     │
	│ start   │ -p pause-179913 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-179913              │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:05 UTC │
	│ delete  │ -p running-upgrade-511706                                                                                                                                                                                                                   │ running-upgrade-511706    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p cert-options-977501 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:05 UTC │
	│ ssh     │ force-systemd-flag-302211 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │ 10 Dec 25 07:04 UTC │
	│ delete  │ -p force-systemd-flag-302211                                                                                                                                                                                                                │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │ 10 Dec 25 07:04 UTC │
	│ start   │ -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-508835    │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ ssh     │ cert-options-977501 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ ssh     │ -p cert-options-977501 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ delete  │ -p cert-options-977501                                                                                                                                                                                                                      │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-548860         │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:05:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:05:16.206314  294110 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:05:16.206578  294110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:05:16.206587  294110 out.go:374] Setting ErrFile to fd 2...
	I1210 07:05:16.206591  294110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:05:16.206801  294110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 07:05:16.207315  294110 out.go:368] Setting JSON to false
	I1210 07:05:16.208514  294110 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31663,"bootTime":1765318653,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 07:05:16.208614  294110 start.go:143] virtualization: kvm guest
	I1210 07:05:16.210825  294110 out.go:179] * [no-preload-548860] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 07:05:16.212276  294110 notify.go:221] Checking for updates...
	I1210 07:05:16.212297  294110 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:05:16.214009  294110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:05:16.215560  294110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:16.217267  294110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.218913  294110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 07:05:16.220546  294110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:05:16.222559  294110 config.go:182] Loaded profile config "cert-expiration-198346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:16.222726  294110 config.go:182] Loaded profile config "guest-539425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 07:05:16.222962  294110 config.go:182] Loaded profile config "old-k8s-version-508835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 07:05:16.223227  294110 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:16.223392  294110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:05:16.270545  294110 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 07:05:16.272051  294110 start.go:309] selected driver: kvm2
	I1210 07:05:16.272070  294110 start.go:927] validating driver "kvm2" against <nil>
	I1210 07:05:16.272085  294110 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:05:16.272836  294110 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:05:16.273115  294110 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:16.273144  294110 cni.go:84] Creating CNI manager for ""
	I1210 07:05:16.273205  294110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:16.273220  294110 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:05:16.273298  294110 start.go:353] cluster config:
	{Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:16.273429  294110 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.275396  294110 out.go:179] * Starting "no-preload-548860" primary control-plane node in "no-preload-548860" cluster
	I1210 07:05:16.276945  294110 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:05:16.277096  294110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json ...
	I1210 07:05:16.277139  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json: {Name:mka274ecd1a9c088a679326196f01e5af9e1ec92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:16.277331  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.277365  294110 start.go:360] acquireMachinesLock for no-preload-548860: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 07:05:16.277431  294110 start.go:364] duration metric: took 30.345µs to acquireMachinesLock for "no-preload-548860"
	I1210 07:05:16.277461  294110 start.go:93] Provisioning new machine with config: &{Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:16.277541  294110 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 07:05:14.066553  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:14.066601  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:14.066627  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:16.074470  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:16.074514  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:16.074539  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:18.083725  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:18.083764  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:18.083793  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
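
The repeated 500 responses above come from minikube polling the apiserver's /healthz endpoint every couple of seconds until etcd and the remaining bootstrap post-start hooks report ok. A simplified sketch of that kind of poll loop in Go; it skips TLS verification for brevity, whereas the real check trusts the cluster CA, and the URL and timeout are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 bodies (like the component-by-component output above) are discarded.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; the real check uses the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.83.172:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
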
	I1210 07:05:16.808679  293764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256657975s)
	I1210 07:05:16.808705  293764 crio.go:469] duration metric: took 2.256782266s to extract the tarball
	I1210 07:05:16.808714  293764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 07:05:16.863654  293764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:05:16.915842  293764 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:05:16.915865  293764 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:05:16.915888  293764 kubeadm.go:935] updating node { 192.168.50.231 8443 v1.28.0 crio true true} ...
	I1210 07:05:16.915985  293764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-508835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-508835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:05:16.916062  293764 ssh_runner.go:195] Run: crio config
	I1210 07:05:16.971131  293764 cni.go:84] Creating CNI manager for ""
	I1210 07:05:16.971159  293764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:16.971185  293764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:05:16.971227  293764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.231 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508835 NodeName:old-k8s-version-508835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:05:16.971431  293764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-508835"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:05:16.971514  293764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1210 07:05:16.988370  293764 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:05:16.988454  293764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:05:17.003395  293764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:05:17.027477  293764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:05:17.055549  293764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1210 07:05:17.082668  293764 ssh_runner.go:195] Run: grep 192.168.50.231	control-plane.minikube.internal$ /etc/hosts
	I1210 07:05:17.088894  293764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
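
The bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current mapping before copying the result back with sudo. The same strip-then-append logic, sketched in Go (the entry suffix and IP come from the log; the sketch prints the rewritten file instead of writing it back):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites hosts content so that exactly one line maps
// control-plane.minikube.internal, matching the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip string) string {
	const suffix = "\tcontrol-plane.minikube.internal"
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, suffix) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+suffix)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.50.231"))
}
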
	I1210 07:05:17.108231  293764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:17.292912  293764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:17.336862  293764 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835 for IP: 192.168.50.231
	I1210 07:05:17.336898  293764 certs.go:195] generating shared ca certs ...
	I1210 07:05:17.336919  293764 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.337120  293764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 07:05:17.337182  293764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 07:05:17.337197  293764 certs.go:257] generating profile certs ...
	I1210 07:05:17.337278  293764 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key
	I1210 07:05:17.337312  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt with IP's: []
	I1210 07:05:17.397490  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt ...
	I1210 07:05:17.397522  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: {Name:mkfdd1331d0f05540e260f5cb03882408a7eed76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.397727  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key ...
	I1210 07:05:17.397745  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key: {Name:mk042496ce578626a444eeea6e0812d38d4d73dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.397868  293764 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092
	I1210 07:05:17.397911  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.231]
	I1210 07:05:17.578994  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 ...
	I1210 07:05:17.579025  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092: {Name:mk3307c8ded08808c71b9d2a1a3f81e34a37cc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.579206  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092 ...
	I1210 07:05:17.579224  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092: {Name:mk25ebc3af9d5f8d9057125864abfb2e61fd787b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.579333  293764 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt
	I1210 07:05:17.579422  293764 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key
	I1210 07:05:17.579505  293764 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key
	I1210 07:05:17.579530  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt with IP's: []
	I1210 07:05:17.628159  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt ...
	I1210 07:05:17.628190  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt: {Name:mkae654fb4308c8a05a021a33322a845a1288052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.628379  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key ...
	I1210 07:05:17.628411  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key: {Name:mk602569a094af04941e3424c21f68e5bf6eb2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.628633  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem (1338 bytes)
	W1210 07:05:17.628688  293764 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366_empty.pem, impossibly tiny 0 bytes
	I1210 07:05:17.628705  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:05:17.628743  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:05:17.628783  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:05:17.628819  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 07:05:17.628892  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:17.629565  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:05:17.673120  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:05:17.708293  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:05:17.747930  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:05:17.789028  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:05:17.833224  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:05:17.869711  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:05:17.904498  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:05:17.943073  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /usr/share/ca-certificates/2473662.pem (1708 bytes)
	I1210 07:05:17.992682  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:05:18.033787  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem --> /usr/share/ca-certificates/247366.pem (1338 bytes)
	I1210 07:05:18.088542  293764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:05:18.134417  293764 ssh_runner.go:195] Run: openssl version
	I1210 07:05:18.145761  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.166082  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2473662.pem /etc/ssl/certs/2473662.pem
	I1210 07:05:18.185359  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.193258  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:59 /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.193334  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.205065  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:18.222449  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2473662.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:18.245985  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.265317  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:05:18.283092  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.291584  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.291670  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.301559  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:05:18.321136  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:05:18.338572  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.355927  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/247366.pem /etc/ssl/certs/247366.pem
	I1210 07:05:18.373116  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.381640  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:59 /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.381729  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.398000  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:05:18.414669  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/247366.pem /etc/ssl/certs/51391683.0
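
The ln -fs / openssl x509 -hash sequence above is the standard OpenSSL hashed-directory convention: each CA file placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (hence the 3ec20f2e.0, b5213941.0 and 51391683.0 names), so the TLS stack can look the certificate up by hash. A minimal sketch of that convention, assuming openssl is on PATH; the file path below is illustrative and not one from this run:

// hashlink.go: link a PEM certificate into an OpenSSL hashed directory.
// Sketch only; the real runner does the equivalent with sudo ln -fs over SSH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses
	// to locate CA certificates in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The trust store expects a symlink named <hash>.0 pointing at the PEM file.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
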
	I1210 07:05:18.433027  293764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:05:18.442915  293764 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:05:18.442986  293764 kubeadm.go:401] StartCluster: {Name:old-k8s-version-508835 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-508835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.231 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:18.443095  293764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:05:18.443164  293764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:05:18.497505  293764 cri.go:89] found id: ""
	I1210 07:05:18.497596  293764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:05:18.512434  293764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:05:18.527144  293764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:05:18.542932  293764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:05:18.542952  293764 kubeadm.go:158] found existing configuration files:
	
	I1210 07:05:18.543005  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:05:18.555920  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:05:18.555982  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:05:18.571181  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:05:18.586661  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:05:18.586762  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:05:18.603660  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:05:18.618857  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:05:18.618982  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:05:18.634729  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:05:18.648027  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:05:18.648104  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:05:18.661926  293764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 07:05:18.736922  293764 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1210 07:05:18.737021  293764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:05:18.962809  293764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:05:18.963026  293764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:05:18.963163  293764 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 07:05:19.181013  293764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:05:19.189016  293764 out.go:252]   - Generating certificates and keys ...
	I1210 07:05:19.189154  293764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:05:19.189251  293764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:05:19.386451  293764 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:05:19.621125  293764 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:05:19.791435  293764 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:05:16.279403  294110 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1210 07:05:16.279590  294110 start.go:159] libmachine.API.Create for "no-preload-548860" (driver="kvm2")
	I1210 07:05:16.279625  294110 client.go:173] LocalClient.Create starting
	I1210 07:05:16.279692  294110 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem
	I1210 07:05:16.279743  294110 main.go:143] libmachine: Decoding PEM data...
	I1210 07:05:16.279780  294110 main.go:143] libmachine: Parsing certificate...
	I1210 07:05:16.279846  294110 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem
	I1210 07:05:16.279916  294110 main.go:143] libmachine: Decoding PEM data...
	I1210 07:05:16.279937  294110 main.go:143] libmachine: Parsing certificate...
	I1210 07:05:16.280321  294110 main.go:143] libmachine: creating domain...
	I1210 07:05:16.280334  294110 main.go:143] libmachine: creating network...
	I1210 07:05:16.282108  294110 main.go:143] libmachine: found existing default network
	I1210 07:05:16.282425  294110 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.283573  294110 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:1f:09} reservation:<nil>}
	I1210 07:05:16.284920  294110 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:b2:1d} reservation:<nil>}
	I1210 07:05:16.285961  294110 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:7b:73} reservation:<nil>}
	I1210 07:05:16.287386  294110 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cad7d0}
	I1210 07:05:16.287520  294110 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-548860</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.294751  294110 main.go:143] libmachine: creating private network mk-no-preload-548860 192.168.72.0/24...
	I1210 07:05:16.398048  294110 main.go:143] libmachine: private network mk-no-preload-548860 192.168.72.0/24 created
	I1210 07:05:16.398398  294110 main.go:143] libmachine: <network>
	  <name>mk-no-preload-548860</name>
	  <uuid>cf93ddd6-ad43-4377-b4c8-115a8ae19c44</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:21:ec:98'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.398443  294110 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 ...
	I1210 07:05:16.398497  294110 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 07:05:16.398511  294110 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.398590  294110 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22094-243461/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 07:05:16.456159  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.627626  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.678007  294110 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa...
	I1210 07:05:16.806580  294110 cache.go:107] acquiring lock: {Name:mk929cf02fc539c0a3ba415ba856603e7a2db9a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806629  294110 cache.go:107] acquiring lock: {Name:mke47ba0cdabea58510a295512b0c545824c6ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806636  294110 cache.go:107] acquiring lock: {Name:mk2eb8700e86825bc25abe1ba8e089f6daac20f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806586  294110 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806638  294110 cache.go:107] acquiring lock: {Name:mk5be695d928909e19606bdb32e31778a0102505 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806679  294110 cache.go:107] acquiring lock: {Name:mkf0a6fde87f6be7fe6d187eadd0116c7e80851c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806775  294110 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:05:16.806591  294110 cache.go:107] acquiring lock: {Name:mkb531d6d5f9ca51fa23c93b7dbb7d49c9f9871f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806793  294110 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 234.679µs
	I1210 07:05:16.806585  294110 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806806  294110 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:05:16.806842  294110 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:16.806871  294110 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:16.806949  294110 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:16.807007  294110 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:16.807027  294110 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:16.807161  294110 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:05:16.807171  294110 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 614.196µs
	I1210 07:05:16.807180  294110 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:05:16.807188  294110 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:16.808352  294110 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:16.808351  294110 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:16.808350  294110 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:16.808423  294110 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:16.808351  294110 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:16.809199  294110 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:16.847569  294110 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk...
	I1210 07:05:16.847626  294110 main.go:143] libmachine: Writing magic tar header
	I1210 07:05:16.847656  294110 main.go:143] libmachine: Writing SSH key tar header
	I1210 07:05:16.847751  294110 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 ...
	I1210 07:05:16.847836  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860
	I1210 07:05:16.847870  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 (perms=drwx------)
	I1210 07:05:16.847907  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines
	I1210 07:05:16.847924  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines (perms=drwxr-xr-x)
	I1210 07:05:16.847941  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.847954  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube (perms=drwxr-xr-x)
	I1210 07:05:16.847966  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461
	I1210 07:05:16.847979  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461 (perms=drwxrwxr-x)
	I1210 07:05:16.847992  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 07:05:16.848012  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 07:05:16.848026  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 07:05:16.848037  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 07:05:16.848053  294110 main.go:143] libmachine: checking permissions on dir: /home
	I1210 07:05:16.848067  294110 main.go:143] libmachine: skipping /home - not owner
	I1210 07:05:16.848076  294110 main.go:143] libmachine: defining domain...
	I1210 07:05:16.849697  294110 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-548860</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-548860'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 07:05:16.855284  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:30:24:f8 in network default
	I1210 07:05:16.855990  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:16.856011  294110 main.go:143] libmachine: starting domain...
	I1210 07:05:16.856015  294110 main.go:143] libmachine: ensuring networks are active...
	I1210 07:05:16.856813  294110 main.go:143] libmachine: Ensuring network default is active
	I1210 07:05:16.857186  294110 main.go:143] libmachine: Ensuring network mk-no-preload-548860 is active
	I1210 07:05:16.857756  294110 main.go:143] libmachine: getting domain XML...
	I1210 07:05:16.858729  294110 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-548860</name>
	  <uuid>8982359d-2b52-47d5-8872-36df279de441</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6b:d2:9f'/>
	      <source network='mk-no-preload-548860'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:30:24:f8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
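
The two XML documents above are, first, the domain definition the kvm2 driver submits and, second, the expanded XML libvirt reports back after defining it (UUID and PCI addresses filled in), which it then boots. A rough sketch of defining and starting a domain from such XML with the libvirt Go bindings; the libvirt.org/go/libvirt import and the domain.xml file name are assumptions for illustration, not taken from this log, while qemu:///system matches the KVMQemuURI shown in the cluster config earlier:

// definedomain.go: define and start a libvirt domain from an XML description.
// A minimal sketch using the libvirt Go bindings; not the kvm2 driver's code.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // e.g. the <domain type='kvm'> document above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DefineXML registers the domain persistently; Create then boots it,
	// corresponding to the "defining domain" / "starting domain" log lines.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
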
	
	I1210 07:05:16.965549  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:05:16.970732  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:16.995224  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:17.023669  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 07:05:17.025984  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1210 07:05:17.045842  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:17.585762  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:05:17.585791  294110 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 779.154973ms
	I1210 07:05:17.585802  294110 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:05:18.273163  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:05:18.273195  294110 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.466560078s
	I1210 07:05:18.273209  294110 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:05:18.349462  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:05:18.349510  294110 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.542880705s
	I1210 07:05:18.349529  294110 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:05:18.425650  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:05:18.425683  294110 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.619112587s
	I1210 07:05:18.425699  294110 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:05:18.432235  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:05:18.432265  294110 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.625715165s
	I1210 07:05:18.432277  294110 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:05:18.461488  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:05:18.461519  294110 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.654875909s
	I1210 07:05:18.461537  294110 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:05:18.461555  294110 cache.go:87] Successfully saved all images to host disk.
	I1210 07:05:18.895683  294110 main.go:143] libmachine: waiting for domain to start...
	I1210 07:05:18.897270  294110 main.go:143] libmachine: domain is now running
	I1210 07:05:18.897288  294110 main.go:143] libmachine: waiting for IP...
	I1210 07:05:18.898169  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:18.898970  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:18.898991  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:18.899531  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:18.899588  294110 retry.go:31] will retry after 261.109719ms: waiting for domain to come up
	I1210 07:05:19.162347  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.163226  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.163247  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.163714  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.163769  294110 retry.go:31] will retry after 255.06727ms: waiting for domain to come up
	I1210 07:05:19.420807  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.421772  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.421807  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.422359  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.422412  294110 retry.go:31] will retry after 384.138204ms: waiting for domain to come up
	I1210 07:05:19.808271  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.808950  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.808970  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.809384  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.809424  294110 retry.go:31] will retry after 559.345956ms: waiting for domain to come up
	I1210 07:05:20.370429  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:20.371329  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:20.371346  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:20.371852  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:20.371920  294110 retry.go:31] will retry after 697.632642ms: waiting for domain to come up
	I1210 07:05:21.071160  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:21.071941  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:21.071961  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:21.072379  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:21.072422  294110 retry.go:31] will retry after 652.843698ms: waiting for domain to come up
	I1210 07:05:20.084416  293764 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:05:20.312465  293764 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:05:20.312864  293764 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-508835] and IPs [192.168.50.231 127.0.0.1 ::1]
	I1210 07:05:20.622637  293764 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:05:20.622867  293764 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-508835] and IPs [192.168.50.231 127.0.0.1 ::1]
	I1210 07:05:20.814447  293764 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:05:21.253410  293764 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:05:21.548631  293764 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:05:21.549285  293764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:05:21.685611  293764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:05:21.828322  293764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:05:21.962352  293764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:05:22.281291  293764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:05:22.281762  293764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:05:22.284704  293764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:05:20.095051  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:20.095096  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:20.095123  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:22.102130  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:22.102168  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:22.102195  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
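
The repeated 500 responses above come from polling the apiserver's /healthz endpoint while etcd and several post-start hooks are still initialising; the runner keeps re-checking until it gets a 200. A bare-bones sketch of such a poll follows; the URL is the address from this log, while the skip-verify transport and the 2-second interval are illustrative assumptions rather than minikube's exact settings:

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
// Illustrative sketch only; the real check lives in minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bootstrap,
		// so a bootstrap health probe typically skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.83.172:8443/healthz" // address taken from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 body lists which checks failed, e.g. "[-]etcd failed: reason withheld".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
}
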
	I1210 07:05:22.287443  293764 out.go:252]   - Booting up control plane ...
	I1210 07:05:22.287579  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:05:22.287681  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:05:22.287776  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:05:22.306297  293764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:05:22.307275  293764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:05:22.307367  293764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:05:22.500236  293764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 07:05:21.727180  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:21.727957  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:21.727984  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:21.728483  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:21.728533  294110 retry.go:31] will retry after 1.171673249s: waiting for domain to come up
	I1210 07:05:22.902321  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:22.903157  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:22.903183  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:22.903638  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:22.903682  294110 retry.go:31] will retry after 1.024090759s: waiting for domain to come up
	I1210 07:05:23.929407  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:23.930288  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:23.930321  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:23.930735  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:23.930781  294110 retry.go:31] will retry after 1.64218528s: waiting for domain to come up
	I1210 07:05:25.574577  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:25.575352  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:25.575370  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:25.575900  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:25.575947  294110 retry.go:31] will retry after 2.176104411s: waiting for domain to come up
	I1210 07:05:24.109943  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:24.109984  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:24.110021  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:26.117316  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:26.117358  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:26.117384  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:28.124715  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:28.124754  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:28.124776  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:29.000968  293764 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.504795 seconds
	I1210 07:05:29.001139  293764 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:05:29.020007  293764 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:05:29.557546  293764 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:05:29.557852  293764 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-508835 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:05:30.077738  293764 kubeadm.go:319] [bootstrap-token] Using token: ul6r04.zb9s25jrgemvvhyb
	I1210 07:05:30.079471  293764 out.go:252]   - Configuring RBAC rules ...
	I1210 07:05:30.079678  293764 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:05:30.089980  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:05:30.106937  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:05:30.117459  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:05:30.124786  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:05:30.128962  293764 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:05:30.155466  293764 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:05:30.510419  293764 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:05:30.577855  293764 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:05:30.581050  293764 kubeadm.go:319] 
	I1210 07:05:30.581150  293764 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:05:30.581157  293764 kubeadm.go:319] 
	I1210 07:05:30.581266  293764 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:05:30.581317  293764 kubeadm.go:319] 
	I1210 07:05:30.581376  293764 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:05:30.581475  293764 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:05:30.581571  293764 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:05:30.581582  293764 kubeadm.go:319] 
	I1210 07:05:30.581692  293764 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:05:30.581718  293764 kubeadm.go:319] 
	I1210 07:05:30.581800  293764 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:05:30.581832  293764 kubeadm.go:319] 
	I1210 07:05:30.581942  293764 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:05:30.582055  293764 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:05:30.582163  293764 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:05:30.582173  293764 kubeadm.go:319] 
	I1210 07:05:30.582320  293764 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:05:30.582435  293764 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:05:30.582446  293764 kubeadm.go:319] 
	I1210 07:05:30.582555  293764 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ul6r04.zb9s25jrgemvvhyb \
	I1210 07:05:30.582686  293764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 \
	I1210 07:05:30.582716  293764 kubeadm.go:319] 	--control-plane 
	I1210 07:05:30.582721  293764 kubeadm.go:319] 
	I1210 07:05:30.582826  293764 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:05:30.582834  293764 kubeadm.go:319] 
	I1210 07:05:30.583018  293764 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ul6r04.zb9s25jrgemvvhyb \
	I1210 07:05:30.583153  293764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 
	I1210 07:05:30.586305  293764 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:05:30.586343  293764 cni.go:84] Creating CNI manager for ""
	I1210 07:05:30.586353  293764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:30.588272  293764 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
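	The bridge CNI selected here is written later in this log as /etc/cni/net.d/1-k8s.conflist on the node. A sketch of how the generated file could be read back from the host, assuming the minikube ssh invocation below (the profile name and file path are taken from this log):
	    # read the bridge CNI config minikube generates for the old-k8s-version-508835 profile
	    out/minikube-linux-amd64 ssh -p old-k8s-version-508835 "sudo cat /etc/cni/net.d/1-k8s.conflist"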
	I1210 07:05:27.754268  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:27.755161  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:27.755206  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:27.755672  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:27.755721  294110 retry.go:31] will retry after 1.809505749s: waiting for domain to come up
	I1210 07:05:29.567691  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:29.568456  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:29.568472  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:29.568951  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:29.568991  294110 retry.go:31] will retry after 2.58197786s: waiting for domain to come up
	I1210 07:05:30.132497  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:30.132543  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:30.132570  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:32.140647  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:32.140680  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:32.140704  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:30.589945  293764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:05:30.622758  293764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 07:05:30.721185  293764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:05:30.721352  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:30.721363  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-508835 minikube.k8s.io/updated_at=2025_12_10T07_05_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=old-k8s-version-508835 minikube.k8s.io/primary=true
	I1210 07:05:30.843717  293764 ops.go:34] apiserver oom_adj: -16
	I1210 07:05:30.996234  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:31.497128  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:31.996377  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:32.496973  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:32.996691  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:33.497137  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:33.997045  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:34.497323  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:32.152841  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:32.153546  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:32.153563  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:32.154058  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:32.154099  294110 retry.go:31] will retry after 3.614054475s: waiting for domain to come up
	I1210 07:05:35.769329  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:35.770129  294110 main.go:143] libmachine: domain no-preload-548860 has current primary IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:35.770148  294110 main.go:143] libmachine: found domain IP: 192.168.72.64
	I1210 07:05:35.770157  294110 main.go:143] libmachine: reserving static IP address...
	I1210 07:05:35.770598  294110 main.go:143] libmachine: unable to find host DHCP lease matching {name: "no-preload-548860", mac: "52:54:00:6b:d2:9f", ip: "192.168.72.64"} in network mk-no-preload-548860
	I1210 07:05:36.064219  294110 main.go:143] libmachine: reserved static IP address 192.168.72.64 for domain no-preload-548860
	I1210 07:05:36.064255  294110 main.go:143] libmachine: waiting for SSH...
	I1210 07:05:36.064264  294110 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 07:05:36.068846  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.069443  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.069481  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.069761  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.070242  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.070262  294110 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 07:05:36.194127  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:05:36.194505  294110 main.go:143] libmachine: domain creation complete
	I1210 07:05:36.196345  294110 machine.go:94] provisionDockerMachine start ...
	I1210 07:05:36.199346  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.199849  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.199901  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.200133  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.200358  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.200373  294110 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:05:34.147765  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:34.147863  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:34.147937  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:36.154382  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:36.154428  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:36.154448  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.822940  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": EOF
	I1210 07:05:37.823010  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.830354  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": read tcp 192.168.83.1:53560->192.168.83.172:8443: read: connection reset by peer
	I1210 07:05:37.998713  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.999558  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
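	The progression above against the same endpoint (repeated 500s, then EOF, a connection reset, and finally connection refused) is consistent with the kube-apiserver container on 192.168.83.172 stopping and being restarted rather than the health probe itself misbehaving. A sketch of how that could be checked on the node, assuming crictl with the CRI-O runtime (the --name filter and the <container-id> placeholder are illustrative):
	    # show all apiserver containers, including exited ones, then pull recent logs from one
	    sudo crictl ps -a --name kube-apiserver
	    sudo crictl logs --tail 50 <container-id>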
	I1210 07:05:36.320325  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 07:05:36.320358  294110 buildroot.go:166] provisioning hostname "no-preload-548860"
	I1210 07:05:36.323439  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.323998  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.324042  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.324294  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.324621  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.324638  294110 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548860 && echo "no-preload-548860" | sudo tee /etc/hostname
	I1210 07:05:36.467740  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548860
	
	I1210 07:05:36.471153  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.471580  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.471611  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.471836  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.472095  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.472118  294110 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548860/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:05:36.611324  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:05:36.611366  294110 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 07:05:36.611401  294110 buildroot.go:174] setting up certificates
	I1210 07:05:36.611417  294110 provision.go:84] configureAuth start
	I1210 07:05:36.614604  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.615203  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.615233  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.618501  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.618944  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.618970  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.619115  294110 provision.go:143] copyHostCerts
	I1210 07:05:36.619197  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem, removing ...
	I1210 07:05:36.619211  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem
	I1210 07:05:36.619300  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 07:05:36.619438  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem, removing ...
	I1210 07:05:36.619454  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem
	I1210 07:05:36.619498  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 07:05:36.619560  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem, removing ...
	I1210 07:05:36.619567  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem
	I1210 07:05:36.619592  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 07:05:36.619648  294110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.no-preload-548860 san=[127.0.0.1 192.168.72.64 localhost minikube no-preload-548860]
	I1210 07:05:36.760391  294110 provision.go:177] copyRemoteCerts
	I1210 07:05:36.760469  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:05:36.763680  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.764184  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.764213  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.764397  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:36.856922  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:05:36.893044  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:05:36.928738  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:05:36.964204  294110 provision.go:87] duration metric: took 352.757093ms to configureAuth
	I1210 07:05:36.964237  294110 buildroot.go:189] setting minikube options for container-runtime
	I1210 07:05:36.964430  294110 config.go:182] Loaded profile config "no-preload-548860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 07:05:36.967548  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.968165  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.968208  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.968503  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.968758  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.968785  294110 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:05:37.251523  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:05:37.251565  294110 machine.go:97] duration metric: took 1.055200039s to provisionDockerMachine
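	The provisioning step that just finished wrote the insecure-registry option into /etc/sysconfig/crio.minikube and restarted CRI-O over SSH. A sketch of how the drop-in and the runtime could be re-checked on the node afterwards, assuming a shell on the machine (the systemctl call is an assumption; the file path is taken from the command above):
	    # confirm the option minikube wrote and that CRI-O came back up after the restart
	    cat /etc/sysconfig/crio.minikube
	    systemctl is-active crio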
	I1210 07:05:37.251578  294110 client.go:176] duration metric: took 20.971943282s to LocalClient.Create
	I1210 07:05:37.251598  294110 start.go:167] duration metric: took 20.972007767s to libmachine.API.Create "no-preload-548860"
	I1210 07:05:37.251608  294110 start.go:293] postStartSetup for "no-preload-548860" (driver="kvm2")
	I1210 07:05:37.251624  294110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:05:37.251696  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:05:37.255235  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.255793  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.255824  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.256086  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.351083  294110 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:05:37.357031  294110 info.go:137] Remote host: Buildroot 2025.02
	I1210 07:05:37.357064  294110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 07:05:37.357161  294110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 07:05:37.357284  294110 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem -> 2473662.pem in /etc/ssl/certs
	I1210 07:05:37.357413  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:05:37.373661  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:37.405354  294110 start.go:296] duration metric: took 153.724982ms for postStartSetup
	I1210 07:05:37.409794  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.410400  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.410429  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.410756  294110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json ...
	I1210 07:05:37.411066  294110 start.go:128] duration metric: took 21.133509001s to createHost
	I1210 07:05:37.417978  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.418703  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.418741  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.419052  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:37.419288  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:37.419305  294110 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 07:05:37.541300  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765350337.501184110
	
	I1210 07:05:37.541335  294110 fix.go:216] guest clock: 1765350337.501184110
	I1210 07:05:37.541347  294110 fix.go:229] Guest: 2025-12-10 07:05:37.50118411 +0000 UTC Remote: 2025-12-10 07:05:37.411081074 +0000 UTC m=+21.271682045 (delta=90.103036ms)
	I1210 07:05:37.541372  294110 fix.go:200] guest clock delta is within tolerance: 90.103036ms
	I1210 07:05:37.541380  294110 start.go:83] releasing machines lock for "no-preload-548860", held for 21.263935615s
	I1210 07:05:37.544476  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.544857  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.544898  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.545575  294110 ssh_runner.go:195] Run: cat /version.json
	I1210 07:05:37.545690  294110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:05:37.549192  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549200  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549795  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.549824  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549982  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.550024  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.550030  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.550253  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.662325  294110 ssh_runner.go:195] Run: systemctl --version
	I1210 07:05:37.669309  294110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:05:37.841819  294110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:05:37.850301  294110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:05:37.850398  294110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:05:37.874908  294110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:05:37.874942  294110 start.go:496] detecting cgroup driver to use...
	I1210 07:05:37.875016  294110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:05:37.896475  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:05:37.914318  294110 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:05:37.914399  294110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:05:37.934942  294110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:05:37.953295  294110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:05:38.114852  294110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:05:38.332612  294110 docker.go:234] disabling docker service ...
	I1210 07:05:38.332689  294110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:05:38.353540  294110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:05:38.373202  294110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:05:38.536677  294110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:05:38.702119  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:05:38.720186  294110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:05:38.749869  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:38.909781  294110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:05:38.909851  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.924629  294110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:05:38.924705  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.938319  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.952483  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.966337  294110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:05:38.980908  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.995766  294110 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:39.021247  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:39.036099  294110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:05:39.048446  294110 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 07:05:39.048519  294110 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 07:05:39.078291  294110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:05:39.095974  294110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:39.258006  294110 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:05:39.394593  294110 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:05:39.394667  294110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:05:39.401389  294110 start.go:564] Will wait 60s for crictl version
	I1210 07:05:39.401458  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:39.406185  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 07:05:39.450035  294110 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 07:05:39.450133  294110 ssh_runner.go:195] Run: crio --version
	I1210 07:05:39.484666  294110 ssh_runner.go:195] Run: crio --version
	I1210 07:05:39.522170  294110 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1210 07:05:34.997014  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:35.497136  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:35.997184  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:36.496582  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:36.997154  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:37.497025  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:37.996845  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:38.497334  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:38.996365  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:39.497183  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:39.527366  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:39.527839  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:39.527897  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:39.528144  294110 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 07:05:39.534832  294110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:05:39.553608  294110 kubeadm.go:884] updating cluster {Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:05:39.553816  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:39.706508  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:39.858358  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:40.007040  294110 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:05:40.007109  294110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:05:40.047504  294110 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 07:05:40.047544  294110 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:05:40.047599  294110 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:40.047942  294110 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.047970  294110 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.048145  294110 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.048187  294110 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.048309  294110 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.047941  294110 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.048314  294110 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.049994  294110 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.050022  294110 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:40.050023  294110 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.050060  294110 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.050112  294110 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.050128  294110 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.049994  294110 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.051472  294110 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.173671  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.174561  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.174933  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.182455  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.191008  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.200370  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:05:40.245011  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.329945  294110 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1210 07:05:40.330007  294110 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.330070  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381884  294110 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1210 07:05:40.381904  294110 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1210 07:05:40.381941  294110 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.381941  294110 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.381971  294110 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1210 07:05:40.381999  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381999  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381999  294110 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.382052  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.391682  294110 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1210 07:05:40.391727  294110 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:05:40.391740  294110 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.391760  294110 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.391813  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.391827  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.407956  294110 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1210 07:05:40.408034  294110 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.408072  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.408089  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.408129  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.408150  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.408190  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.408250  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.408269  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.426342  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.541129  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.541255  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.550329  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.550497  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.550562  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.582450  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.607731  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.719268  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.719288  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.719320  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.719358  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.722312  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.722315  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.750853  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.895331  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:05:40.895410  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:40.895432  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1210 07:05:40.895455  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:40.895461  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:40.895554  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 07:05:40.895567  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:40.895624  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:40.895643  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:05:40.895714  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 07:05:40.895807  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:40.917789  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.917839  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1210 07:05:40.917977  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918002  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1210 07:05:40.918069  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:05:40.918081  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 07:05:40.918097  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1210 07:05:40.918098  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:05:40.918148  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 07:05:40.918161  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1210 07:05:40.918239  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918262  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1210 07:05:40.918331  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918353  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1210 07:05:41.001453  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:41.101612  294110 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:05:41.101699  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 07:05:39.997138  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:40.496354  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:40.997062  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:41.496285  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:41.996480  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:42.497147  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:42.996269  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:43.278335  293764 kubeadm.go:1114] duration metric: took 12.557086571s to wait for elevateKubeSystemPrivileges
	I1210 07:05:43.278384  293764 kubeadm.go:403] duration metric: took 24.835405162s to StartCluster
	I1210 07:05:43.278412  293764 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:43.278502  293764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:43.280525  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:43.280848  293764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:05:43.280848  293764 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.231 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:43.281055  293764 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:05:43.281187  293764 config.go:182] Loaded profile config "old-k8s-version-508835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 07:05:43.281186  293764 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-508835"
	I1210 07:05:43.281213  293764 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-508835"
	I1210 07:05:43.281245  293764 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-508835"
	I1210 07:05:43.281248  293764 host.go:66] Checking if "old-k8s-version-508835" exists ...
	I1210 07:05:43.281261  293764 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508835"
	I1210 07:05:43.283500  293764 out.go:179] * Verifying Kubernetes components...
	I1210 07:05:43.285470  293764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:43.285530  293764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:38.499407  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:38.500131  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:38.999583  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.000263  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:39.499612  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.500439  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:39.998972  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.999688  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:40.499088  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.066862  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 07:05:43.066951  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 07:05:43.066973  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.110804  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 07:05:43.110846  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 07:05:43.286991  293764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:05:43.287057  293764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:05:43.287012  293764 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-508835"
	I1210 07:05:43.287211  293764 host.go:66] Checking if "old-k8s-version-508835" exists ...
	I1210 07:05:43.291267  293764 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:05:43.291354  293764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:05:43.293396  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.294706  293764 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:96:98", ip: ""} in network mk-old-k8s-version-508835: {Iface:virbr2 ExpiryTime:2025-12-10 08:05:06 +0000 UTC Type:0 Mac:52:54:00:8d:96:98 Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:old-k8s-version-508835 Clientid:01:52:54:00:8d:96:98}
	I1210 07:05:43.294753  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined IP address 192.168.50.231 and MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.295354  293764 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/old-k8s-version-508835/id_rsa Username:docker}
	I1210 07:05:43.296944  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.297560  293764 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:96:98", ip: ""} in network mk-old-k8s-version-508835: {Iface:virbr2 ExpiryTime:2025-12-10 08:05:06 +0000 UTC Type:0 Mac:52:54:00:8d:96:98 Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:old-k8s-version-508835 Clientid:01:52:54:00:8d:96:98}
	I1210 07:05:43.297596  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined IP address 192.168.50.231 and MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.297822  293764 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/old-k8s-version-508835/id_rsa Username:docker}
	I1210 07:05:43.849526  293764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:05:43.849531  293764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:44.053928  293764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:05:44.121429  293764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:05:43.499340  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.507135  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:43.507180  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:43.999517  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:44.014644  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:44.014685  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:44.499401  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:44.508865  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:44.508906  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:44.999629  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:45.006592  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 200:
	ok
	I1210 07:05:45.015651  292755 api_server.go:141] control plane version: v1.34.3
	I1210 07:05:45.015698  292755 api_server.go:131] duration metric: took 57.517173318s to wait for apiserver health ...
	I1210 07:05:45.015720  292755 cni.go:84] Creating CNI manager for ""
	I1210 07:05:45.015733  292755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:45.017816  292755 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:05:45.019229  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:05:45.040143  292755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 07:05:45.070653  292755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:45.079911  292755 system_pods.go:59] 7 kube-system pods found
	I1210 07:05:45.079967  292755 system_pods.go:61] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:45.079978  292755 system_pods.go:61] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.079990  292755 system_pods.go:61] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.080009  292755 system_pods.go:61] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.080022  292755 system_pods.go:61] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.080033  292755 system_pods.go:61] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.080042  292755 system_pods.go:61] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.080056  292755 system_pods.go:74] duration metric: took 9.364751ms to wait for pod list to return data ...
	I1210 07:05:45.080070  292755 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:45.087191  292755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:45.087231  292755 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:45.087262  292755 node_conditions.go:105] duration metric: took 7.181183ms to run NodePressure ...
	I1210 07:05:45.087377  292755 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:05:45.521663  292755 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1210 07:05:45.529152  292755 kubeadm.go:744] kubelet initialised
	I1210 07:05:45.529189  292755 kubeadm.go:745] duration metric: took 7.492666ms waiting for restarted kubelet to initialise ...
	I1210 07:05:45.529220  292755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:05:45.551110  292755 ops.go:34] apiserver oom_adj: -16
	I1210 07:05:45.551148  292755 kubeadm.go:602] duration metric: took 1m22.696983533s to restartPrimaryControlPlane
	I1210 07:05:45.551164  292755 kubeadm.go:403] duration metric: took 1m22.85495137s to StartCluster
	I1210 07:05:45.551190  292755 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:45.551288  292755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:45.553273  292755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:45.553697  292755 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:45.553844  292755 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:05:45.554079  292755 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:45.555625  292755 out.go:179] * Verifying Kubernetes components...
	I1210 07:05:45.555642  292755 out.go:179] * Enabled addons: 
	I1210 07:05:41.238442  294110 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:05:41.238506  294110 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:41.238576  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:41.727941  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 07:05:41.728004  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:41.728072  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:41.728076  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:44.443584  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (2.715474141s)
	I1210 07:05:44.443633  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 07:05:44.443649  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.715498363s)
	I1210 07:05:44.443667  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:44.443732  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:44.443735  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:46.088976  293764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.239317287s)
	I1210 07:05:46.089017  293764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.239431401s)
	I1210 07:05:46.089055  293764 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1210 07:05:46.090367  293764 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508835" to be "Ready" ...
	I1210 07:05:46.136096  293764 node_ready.go:49] node "old-k8s-version-508835" is "Ready"
	I1210 07:05:46.136142  293764 node_ready.go:38] duration metric: took 45.736627ms for node "old-k8s-version-508835" to be "Ready" ...
	I1210 07:05:46.136186  293764 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:05:46.136276  293764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:05:46.596354  293764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508835" context rescaled to 1 replicas
	I1210 07:05:46.628544  293764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.50706843s)
	I1210 07:05:46.628577  293764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.574595426s)
	I1210 07:05:46.628629  293764 api_server.go:72] duration metric: took 3.347732942s to wait for apiserver process to appear ...
	I1210 07:05:46.628648  293764 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:05:46.628672  293764 api_server.go:253] Checking apiserver healthz at https://192.168.50.231:8443/healthz ...
	I1210 07:05:46.645038  293764 api_server.go:279] https://192.168.50.231:8443/healthz returned 200:
	ok
	I1210 07:05:46.647241  293764 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 07:05:45.557500  292755 addons.go:530] duration metric: took 3.655629ms for enable addons: enabled=[]
	I1210 07:05:45.557511  292755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:45.868638  292755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:45.904126  292755 node_ready.go:35] waiting up to 6m0s for node "pause-179913" to be "Ready" ...
	I1210 07:05:45.911742  292755 node_ready.go:49] node "pause-179913" is "Ready"
	I1210 07:05:45.911782  292755 node_ready.go:38] duration metric: took 7.609763ms for node "pause-179913" to be "Ready" ...
	I1210 07:05:45.911803  292755 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:05:45.911894  292755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:05:45.939769  292755 api_server.go:72] duration metric: took 386.01566ms to wait for apiserver process to appear ...
	I1210 07:05:45.939810  292755 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:05:45.939836  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:45.946763  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 200:
	ok
	I1210 07:05:45.948279  292755 api_server.go:141] control plane version: v1.34.3
	I1210 07:05:45.948308  292755 api_server.go:131] duration metric: took 8.488476ms to wait for apiserver health ...
	I1210 07:05:45.948322  292755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:45.955449  292755 system_pods.go:59] 7 kube-system pods found
	I1210 07:05:45.955490  292755 system_pods.go:61] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running
	I1210 07:05:45.955498  292755 system_pods.go:61] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.955507  292755 system_pods.go:61] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.955514  292755 system_pods.go:61] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.955528  292755 system_pods.go:61] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.955533  292755 system_pods.go:61] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.955542  292755 system_pods.go:61] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.955552  292755 system_pods.go:74] duration metric: took 7.220379ms to wait for pod list to return data ...
	I1210 07:05:45.955569  292755 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:05:45.963188  292755 default_sa.go:45] found service account: "default"
	I1210 07:05:45.963230  292755 default_sa.go:55] duration metric: took 7.652247ms for default service account to be created ...
	I1210 07:05:45.963246  292755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:05:45.968360  292755 system_pods.go:86] 7 kube-system pods found
	I1210 07:05:45.968396  292755 system_pods.go:89] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running
	I1210 07:05:45.968406  292755 system_pods.go:89] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.968417  292755 system_pods.go:89] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.968427  292755 system_pods.go:89] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.968439  292755 system_pods.go:89] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.968444  292755 system_pods.go:89] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.968453  292755 system_pods.go:89] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.968464  292755 system_pods.go:126] duration metric: took 5.209005ms to wait for k8s-apps to be running ...
	I1210 07:05:45.968474  292755 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:05:45.968540  292755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:45.996916  292755 system_svc.go:56] duration metric: took 28.425073ms WaitForService to wait for kubelet
	I1210 07:05:45.996951  292755 kubeadm.go:587] duration metric: took 443.205596ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:45.996969  292755 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:46.002477  292755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:46.002514  292755 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:46.002541  292755 node_conditions.go:105] duration metric: took 5.566098ms to run NodePressure ...
	I1210 07:05:46.002562  292755 start.go:242] waiting for startup goroutines ...
	I1210 07:05:46.002573  292755 start.go:247] waiting for cluster config update ...
	I1210 07:05:46.002584  292755 start.go:256] writing updated cluster config ...
	I1210 07:05:46.003020  292755 ssh_runner.go:195] Run: rm -f paused
	I1210 07:05:46.013646  292755 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:46.014701  292755 kapi.go:59] client config for pause-179913: &rest.Config{Host:"https://192.168.83.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/client.key", CAFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:05:46.019617  292755 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnm25" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.028451  292755 pod_ready.go:94] pod "coredns-66bc5c9577-nnm25" is "Ready"
	I1210 07:05:46.028495  292755 pod_ready.go:86] duration metric: took 8.845469ms for pod "coredns-66bc5c9577-nnm25" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.028511  292755 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qcwf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.045384  292755 pod_ready.go:94] pod "coredns-66bc5c9577-qcwf4" is "Ready"
	I1210 07:05:46.045431  292755 pod_ready.go:86] duration metric: took 16.910963ms for pod "coredns-66bc5c9577-qcwf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.050198  292755 pod_ready.go:83] waiting for pod "etcd-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:48.060020  292755 pod_ready.go:104] pod "etcd-pause-179913" is not "Ready", error: <nil>
	I1210 07:05:46.647365  293764 api_server.go:141] control plane version: v1.28.0
	I1210 07:05:46.647402  293764 api_server.go:131] duration metric: took 18.739825ms to wait for apiserver health ...
	I1210 07:05:46.647416  293764 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:46.648561  293764 addons.go:530] duration metric: took 3.367506503s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 07:05:46.654059  293764 system_pods.go:59] 8 kube-system pods found
	I1210 07:05:46.654121  293764 system_pods.go:61] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.654137  293764 system_pods.go:61] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.654146  293764 system_pods.go:61] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.654152  293764 system_pods.go:61] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.654159  293764 system_pods.go:61] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.654178  293764 system_pods.go:61] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.654188  293764 system_pods.go:61] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.654195  293764 system_pods.go:61] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending
	I1210 07:05:46.654206  293764 system_pods.go:74] duration metric: took 6.781989ms to wait for pod list to return data ...
	I1210 07:05:46.654221  293764 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:05:46.664597  293764 default_sa.go:45] found service account: "default"
	I1210 07:05:46.664647  293764 default_sa.go:55] duration metric: took 10.414638ms for default service account to be created ...
	I1210 07:05:46.664664  293764 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:05:46.670689  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:46.670748  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.670759  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.670770  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.670778  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.670784  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.670793  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.670799  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.670808  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:46.670836  293764 retry.go:31] will retry after 275.201281ms: missing components: kube-dns, kube-proxy
	I1210 07:05:46.958117  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:46.958171  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.958185  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.958201  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.958210  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.958217  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.958226  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.958238  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.958248  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:46.958274  293764 retry.go:31] will retry after 291.602863ms: missing components: kube-dns, kube-proxy
	I1210 07:05:47.257153  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:47.257204  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:47.257220  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:47.257232  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:47.257239  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:47.257245  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:47.257253  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Running
	I1210 07:05:47.257257  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:47.257275  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:47.257290  293764 system_pods.go:126] duration metric: took 592.616953ms to wait for k8s-apps to be running ...
	I1210 07:05:47.257305  293764 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:05:47.257367  293764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:47.287194  293764 system_svc.go:56] duration metric: took 29.877591ms WaitForService to wait for kubelet
	I1210 07:05:47.287236  293764 kubeadm.go:587] duration metric: took 4.006339822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:47.287301  293764 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:47.291237  293764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:47.291274  293764 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:47.291295  293764 node_conditions.go:105] duration metric: took 3.977687ms to run NodePressure ...
	I1210 07:05:47.291313  293764 start.go:242] waiting for startup goroutines ...
	I1210 07:05:47.291325  293764 start.go:247] waiting for cluster config update ...
	I1210 07:05:47.291345  293764 start.go:256] writing updated cluster config ...
	I1210 07:05:47.291764  293764 ssh_runner.go:195] Run: rm -f paused
	I1210 07:05:47.300101  293764 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:47.307229  293764 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bl5vd" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:49.316287  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	I1210 07:05:46.636984  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (2.19313391s)
	I1210 07:05:46.637028  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 07:05:46.637041  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.193276255s)
	I1210 07:05:46.637076  294110 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:46.637121  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:46.637140  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:48.624381  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.987206786s)
	I1210 07:05:48.624426  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 07:05:48.624420  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.987278214s)
	I1210 07:05:48.624460  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:48.624464  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:05:48.624504  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:48.624554  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:50.297711  294110 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.673133719s)
	I1210 07:05:50.297749  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:05:50.297772  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.6732374s)
	I1210 07:05:50.297811  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 07:05:50.297860  294110 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:50.297775  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:05:50.297985  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:49.558669  292755 pod_ready.go:94] pod "etcd-pause-179913" is "Ready"
	I1210 07:05:49.558712  292755 pod_ready.go:86] duration metric: took 3.508461655s for pod "etcd-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:49.570978  292755 pod_ready.go:83] waiting for pod "kube-apiserver-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:51.579940  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	W1210 07:05:51.816055  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	W1210 07:05:54.314524  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	I1210 07:05:52.974191  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.676158812s)
	I1210 07:05:52.974233  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 07:05:52.974289  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:52.974364  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:55.032742  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (2.058339316s)
	I1210 07:05:55.032788  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 07:05:55.032820  294110 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:55.032905  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:55.790037  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:05:55.790084  294110 cache_images.go:125] Successfully loaded all cached images
	I1210 07:05:55.790090  294110 cache_images.go:94] duration metric: took 15.742526615s to LoadCachedImages
	I1210 07:05:55.790106  294110 kubeadm.go:935] updating node { 192.168.72.64 8443 v1.35.0-rc.1 crio true true} ...
	I1210 07:05:55.790215  294110 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-548860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:05:55.790313  294110 ssh_runner.go:195] Run: crio config
	I1210 07:05:55.848041  294110 cni.go:84] Creating CNI manager for ""
	I1210 07:05:55.848090  294110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:55.848120  294110 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:05:55.848153  294110 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.64 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548860 NodeName:no-preload-548860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:05:55.848333  294110 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.64"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.64"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:05:55.848441  294110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:05:55.864032  294110 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 07:05:55.864106  294110 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:05:55.880330  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:55.880429  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1210 07:05:55.880448  294110 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1210 07:05:55.880502  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 07:05:55.880525  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 07:05:55.887791  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 07:05:55.887836  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1210 07:05:55.888154  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 07:05:55.888216  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	W1210 07:05:54.084084  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	W1210 07:05:56.579205  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	I1210 07:05:56.632450  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:56.651761  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 07:05:56.657644  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 07:05:56.657694  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1210 07:05:56.977184  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:05:56.994355  294110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 07:05:57.020230  294110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:05:57.044542  294110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:05:57.071132  294110 ssh_runner.go:195] Run: grep 192.168.72.64	control-plane.minikube.internal$ /etc/hosts
	I1210 07:05:57.078585  294110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:05:57.100719  294110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:57.273820  294110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:57.313810  294110 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860 for IP: 192.168.72.64
	I1210 07:05:57.313836  294110 certs.go:195] generating shared ca certs ...
	I1210 07:05:57.313857  294110 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.314101  294110 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 07:05:57.314149  294110 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 07:05:57.314162  294110 certs.go:257] generating profile certs ...
	I1210 07:05:57.314240  294110 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key
	I1210 07:05:57.314255  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt with IP's: []
	I1210 07:05:57.386914  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt ...
	I1210 07:05:57.386948  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt: {Name:mk466ed40cc010bd20e3989ea8bea4b4ef4cd073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.387140  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key ...
	I1210 07:05:57.387151  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key: {Name:mk8760fbb8db80630c1d9e63702eb572aa8256a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.387237  294110 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670
	I1210 07:05:57.387252  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.64]
	I1210 07:05:57.414920  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 ...
	I1210 07:05:57.414950  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670: {Name:mk6430d3e58dc86adff6ff0de0dd0fefac0b0a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.415123  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670 ...
	I1210 07:05:57.415139  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670: {Name:mkae2382e0450b2cd3c8cfb56e9465e4c1b5ae33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.415223  294110 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt
	I1210 07:05:57.415295  294110 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key
	I1210 07:05:57.415353  294110 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key
	I1210 07:05:57.415369  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt with IP's: []
	I1210 07:05:57.539953  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt ...
	I1210 07:05:57.539986  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt: {Name:mkaf5ae2e0f916e1d768e22c989c83a2b243ccc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.540173  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key ...
	I1210 07:05:57.540196  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key: {Name:mk41dd84cb64b6d4f260f1ee218c4c81b62b6b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.540381  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem (1338 bytes)
	W1210 07:05:57.540441  294110 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366_empty.pem, impossibly tiny 0 bytes
	I1210 07:05:57.540457  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:05:57.540505  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:05:57.540560  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:05:57.540601  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 07:05:57.540664  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:57.541340  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:05:57.579801  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:05:57.618035  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:05:57.653947  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:05:57.688863  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:05:57.724149  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:05:57.759600  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:05:57.794961  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:05:57.831752  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /usr/share/ca-certificates/2473662.pem (1708 bytes)
	I1210 07:05:57.870309  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:05:57.906615  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem --> /usr/share/ca-certificates/247366.pem (1338 bytes)
	I1210 07:05:57.942207  294110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:05:57.966558  294110 ssh_runner.go:195] Run: openssl version
	I1210 07:05:57.974433  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/247366.pem
	I1210 07:05:57.989402  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/247366.pem /etc/ssl/certs/247366.pem
	I1210 07:05:58.004911  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.011298  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:59 /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.011367  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.023069  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:05:58.041279  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/247366.pem /etc/ssl/certs/51391683.0
	I1210 07:05:58.060692  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.078676  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2473662.pem /etc/ssl/certs/2473662.pem
	I1210 07:05:58.093713  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.100033  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:59 /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.100101  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.107912  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:58.120976  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2473662.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:58.134042  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.147702  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:05:58.163857  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.169936  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.170004  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.177959  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:05:58.193848  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:05:58.206987  294110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:05:58.212249  294110 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:05:58.212311  294110 kubeadm.go:401] StartCluster: {Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:58.212384  294110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:05:58.212453  294110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:05:58.260948  294110 cri.go:89] found id: ""
	I1210 07:05:58.261030  294110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:05:58.274813  294110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:05:58.288739  294110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:05:58.301950  294110 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:05:58.301977  294110 kubeadm.go:158] found existing configuration files:
	
	I1210 07:05:58.302032  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:05:58.314615  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:05:58.314707  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:05:58.328609  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:05:58.340359  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:05:58.340422  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:05:58.355231  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:05:58.368650  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:05:58.368708  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:05:58.382022  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:05:58.395441  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:05:58.395528  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:05:58.410563  294110 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 07:05:58.470728  294110 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:05:58.470783  294110 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:05:58.647776  294110 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:05:58.648000  294110 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:05:58.648167  294110 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:05:58.672800  294110 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
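Editor's note: the grep-then-rm sequence logged above (kubeadm.go:164) is minikube discarding stale kubeconfig files that do not reference the expected control-plane endpoint before re-running kubeadm init. A minimal shell sketch of that check-then-remove pattern, using only the endpoint and file names shown in this log (not minikube's actual implementation), looks like:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already references the expected endpoint;
	  # grep exits non-zero when the pattern or the file itself is missing
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done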
	W1210 07:05:58.579566  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	I1210 07:05:59.577705  292755 pod_ready.go:94] pod "kube-apiserver-pause-179913" is "Ready"
	I1210 07:05:59.577735  292755 pod_ready.go:86] duration metric: took 10.006723147s for pod "kube-apiserver-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.580798  292755 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.585440  292755 pod_ready.go:94] pod "kube-controller-manager-pause-179913" is "Ready"
	I1210 07:05:59.585465  292755 pod_ready.go:86] duration metric: took 4.642896ms for pod "kube-controller-manager-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.588130  292755 pod_ready.go:83] waiting for pod "kube-proxy-rvmnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.592715  292755 pod_ready.go:94] pod "kube-proxy-rvmnw" is "Ready"
	I1210 07:05:59.592750  292755 pod_ready.go:86] duration metric: took 4.593394ms for pod "kube-proxy-rvmnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.594803  292755 pod_ready.go:83] waiting for pod "kube-scheduler-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.775853  292755 pod_ready.go:94] pod "kube-scheduler-pause-179913" is "Ready"
	I1210 07:05:59.775899  292755 pod_ready.go:86] duration metric: took 181.065401ms for pod "kube-scheduler-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.775916  292755 pod_ready.go:40] duration metric: took 13.762227751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:59.828436  292755 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:05:59.830479  292755 out.go:179] * Done! kubectl is now configured to use "pause-179913" cluster and "default" namespace by default
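Editor's note: the pod_ready waits above poll kube-system pods carrying the listed component/k8s-app labels until each reports Ready. A rough command-line approximation (illustrative only; the context name is taken from this log, and the timeout is an assumption, not what minikube itself runs) is:

	# hypothetical reproduction of the readiness wait shown in the log
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context pause-179913 -n kube-system wait \
	    --for=condition=Ready pod -l "$sel" --timeout=120s
	done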
	W1210 07:05:56.319387  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	W1210 07:05:58.814459  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.588857024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350360588834113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e26a01c-fb33-497f-81d5-46dffe46baac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.589906065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35d4454c-6f94-4b37-879e-00cb6a6d42e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.589977583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35d4454c-6f94-4b37-879e-00cb6a6d42e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.590281748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35d4454c-6f94-4b37-879e-00cb6a6d42e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.641233665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a1ab7a1-b9f9-408a-bef9-ae456061b035 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.641343041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a1ab7a1-b9f9-408a-bef9-ae456061b035 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.643019145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38b2e20c-8b87-4ece-a4c4-70fde65bbb2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.643355570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350360643335666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38b2e20c-8b87-4ece-a4c4-70fde65bbb2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.644442618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd2fa1b3-e224-4435-9c6c-fcdc2759c933 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.644500479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd2fa1b3-e224-4435-9c6c-fcdc2759c933 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.644874627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd2fa1b3-e224-4435-9c6c-fcdc2759c933 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.692894991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a0da75f-6bfb-45db-84b4-bb2a4d5ac17d name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.693203999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a0da75f-6bfb-45db-84b4-bb2a4d5ac17d name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.695018252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=774163f1-6539-43cd-8c48-0fd63644d58d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.695355246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350360695334003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=774163f1-6539-43cd-8c48-0fd63644d58d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.696382147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=113206b9-728c-4cfd-9085-4022027fb9ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.696459844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=113206b9-728c-4cfd-9085-4022027fb9ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.696857177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=113206b9-728c-4cfd-9085-4022027fb9ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.744950019Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf45abd5-bdd6-468c-b50a-0074485fd0ed name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.745155067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf45abd5-bdd6-468c-b50a-0074485fd0ed name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.746654365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e686746-1eb4-4b34-a16d-25cfe9022997 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.746995768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350360746975631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e686746-1eb4-4b34-a16d-25cfe9022997 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.748378789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab1e329a-106b-424a-ad20-78bf5b57c015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.748686360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab1e329a-106b-424a-ad20-78bf5b57c015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:00 pause-179913 crio[4102]: time="2025-12-10 07:06:00.749334160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab1e329a-106b-424a-ad20-78bf5b57c015 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cb74559cc4603       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago       Running             coredns                   2                   cd4b91455889c       coredns-66bc5c9577-qcwf4               kube-system
	d3f82629cddb7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   17 seconds ago       Running             kube-proxy                2                   b9b370bea3b07       kube-proxy-rvmnw                       kube-system
	4c72d4cfa68cc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago       Running             coredns                   2                   90fe1e0a09e57       coredns-66bc5c9577-nnm25               kube-system
	9d0fef2ee087e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   20 seconds ago       Running             kube-controller-manager   3                   99b0eb3a927dd       kube-controller-manager-pause-179913   kube-system
	492bd86865d3b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   20 seconds ago       Running             kube-apiserver            3                   fcc77b197ab7b       kube-apiserver-pause-179913            kube-system
	2efb4548b8c28       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 seconds ago       Running             etcd                      2                   eb5d314eec39e       etcd-pause-179913                      kube-system
	93385a5ea66df       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   About a minute ago   Exited              kube-controller-manager   2                   99b0eb3a927dd       kube-controller-manager-pause-179913   kube-system
	2d6458c094802       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   About a minute ago   Running             kube-scheduler            2                   2061c7ea68bf8       kube-scheduler-pause-179913            kube-system
	0431ea1f78207       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   About a minute ago   Exited              kube-apiserver            2                   fcc77b197ab7b       kube-apiserver-pause-179913            kube-system
	f9ae0c7983bb0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   1                   90fe1e0a09e57       coredns-66bc5c9577-nnm25               kube-system
	7d19e592c4f7e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   1                   cd4b91455889c       coredns-66bc5c9577-qcwf4               kube-system
	8d319ed6655ed       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   About a minute ago   Exited              kube-proxy                1                   b9b370bea3b07       kube-proxy-rvmnw                       kube-system
	874494df45e73       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      1                   eb5d314eec39e       etcd-pause-179913                      kube-system
	1edd69d2810bc       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   About a minute ago   Exited              kube-scheduler            1                   de62457ace834       kube-scheduler-pause-179913            kube-system
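
The table above is the runtime-level view, so it still lists containers in the Exited state that kubectl no longer surfaces as separate objects. A sketch for reproducing it directly with crictl on the node, assuming the truncated IDs printed above are accepted as prefixes:

  # list every container known to CRI-O, running or exited
  sudo crictl ps -a
  # inspect one of them, e.g. the exited kube-apiserver attempt 2
  sudo crictl inspect 0431ea1f78207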
	
	
	==> coredns [4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59511 - 30802 "HINFO IN 7029509918167748090.3066795933630391036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05938676s
	
	
	==> coredns [7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46799 - 16884 "HINFO IN 6633913088188194778.3487392683332638165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.464373846s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43785 - 53564 "HINFO IN 4244757400304116886.3751084024964913518. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.452333468s
	
	
	==> coredns [f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35110 - 44402 "HINFO IN 3304936518507635904.4878337891850190860. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.159391822s
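
Both exited coredns containers show the same pattern: list calls to the in-cluster API address 10.96.0.1:443 were refused while the kube-apiserver was down, followed by a SIGTERM when the pods were restarted; the two Running instances that replaced them started cleanly once apiserver attempt 3 was serving. A sketch for pulling the previous instances' logs through the API, assuming the pause-179913 kubeconfig context exists on the host:

  kubectl --context pause-179913 -n kube-system logs coredns-66bc5c9577-qcwf4 --previous
  kubectl --context pause-179913 -n kube-system logs coredns-66bc5c9577-nnm25 --previous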
	
	
	==> describe nodes <==
	Name:               pause-179913
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-179913
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-179913
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_02_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:02:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-179913
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.172
	  Hostname:    pause-179913
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 63ad5a9c54c0437d9f8cf6c0f08657e6
	  System UUID:                63ad5a9c-54c0-437d-9f8c-f6c0f08657e6
	  Boot ID:                    1613e1a6-e262-4f53-82c2-9652eb7aa8b7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nnm25                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m5s
	  kube-system                 coredns-66bc5c9577-qcwf4                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m5s
	  kube-system                 etcd-pause-179913                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m10s
	  kube-system                 kube-apiserver-pause-179913             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-controller-manager-pause-179913    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-proxy-rvmnw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-pause-179913             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m3s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m19s              kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m10s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m10s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m10s              kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s              kubelet          Node pause-179913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s              kubelet          Node pause-179913 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m9s               kubelet          Node pause-179913 status is now: NodeReady
	  Normal  RegisteredNode           3m6s               node-controller  Node pause-179913 event: Registered Node pause-179913 in Controller
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x5 over 74s)  kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 74s)  kubelet          Node pause-179913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 74s)  kubelet          Node pause-179913 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node pause-179913 event: Registered Node pause-179913 in Controller
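
The duplicated kubelet events (NodeAllocatableEnforced and the NodeHasSufficient* set appearing once around the 3m10s mark and again between 74s and 21s) are consistent with the kubelet being restarted partway through the test rather than with a second node registering. A sketch for regenerating this view against the live cluster, again assuming the context is registered locally:

  kubectl --context pause-179913 describe node pause-179913
  # or only the recent node-scoped events
  kubectl --context pause-179913 get events --all-namespaces --field-selector involvedObject.kind=Node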
	
	
	==> dmesg <==
	[Dec10 07:01] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec10 07:02] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000433] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.211962] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000023] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101205] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112366] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.131800] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.149171] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.815709] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 07:03] kauditd_printk_skb: 224 callbacks suppressed
	[Dec10 07:04] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.959488] kauditd_printk_skb: 335 callbacks suppressed
	[ +10.801140] kauditd_printk_skb: 200 callbacks suppressed
	[  +5.462135] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.805196] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.719004] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 07:05] kauditd_printk_skb: 36 callbacks suppressed
	[  +3.042682] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6] <==
	{"level":"warn","ts":"2025-12-10T07:05:41.836239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.847700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.860260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.872741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.883956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.896621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.910714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.923309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.934382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.948961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.961371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.972134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.988706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.998930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.016246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.034874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.043879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.054444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.070949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.101838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.106675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.125505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.140745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.191632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46374","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:05:54.876422Z","caller":"traceutil/trace.go:172","msg":"trace[12156952] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"187.257891ms","start":"2025-12-10T07:05:54.689150Z","end":"2025-12-10T07:05:54.876408Z","steps":["trace[12156952] 'process raft request'  (duration: 186.624007ms)"],"step_count":1}
	
	
	==> etcd [874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76] <==
	{"level":"warn","ts":"2025-12-10T07:04:36.811150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.817438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.827807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.843333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.867831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.879668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.967271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:04:37.366824Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T07:04:37.367018Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-179913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.172:2380"],"advertise-client-urls":["https://192.168.83.172:2379"]}
	{"level":"error","ts":"2025-12-10T07:04:37.367323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:04:44.370189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:04:44.370252Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.370274Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dbf03bba59342af","current-leader-member-id":"dbf03bba59342af"}
	{"level":"info","ts":"2025-12-10T07:04:44.370415Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T07:04:44.370435Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375088Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.172:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375262Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.172:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:04:44.375283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.172:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375339Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375370Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:04:44.375387Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.378326Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.172:2380"}
	{"level":"error","ts":"2025-12-10T07:04:44.378500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.172:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.378615Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.172:2380"}
	{"level":"info","ts":"2025-12-10T07:04:44.378684Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-179913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.172:2380"],"advertise-client-urls":["https://192.168.83.172:2379"]}
	
	
	==> kernel <==
	 07:06:01 up 4 min,  0 users,  load average: 0.26, 0.31, 0.14
	Linux pause-179913 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745] <==
	W1210 07:05:22.871120       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:22.931734       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.059860       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.131889       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.182739       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.194772       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.422100       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.854119       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.922664       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:24.103344Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1210 07:05:24.617321       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:25.694649       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:26.110718Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:28.117978Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:28.787308Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00226da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1210 07:05:28.787498       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1210 07:05:28.787774       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-179913?timeout=10s" auditID="24a34f72-c7bc-4a60-8d41-d2f85e33ab4e"
	E1210 07:05:28.787866       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="67.3µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-179913" result=null
	W1210 07:05:28.856988       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:30.125006Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:32.133771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:34.140814Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1210 07:05:35.988993       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:36.147253Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	F1210 07:05:37.656658       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d94] <==
	I1210 07:05:43.193276       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 07:05:43.193359       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 07:05:43.193655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 07:05:43.194829       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 07:05:43.194862       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 07:05:43.195511       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 07:05:43.195680       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 07:05:43.212371       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 07:05:43.220343       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 07:05:43.226647       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 07:05:43.226718       1 policy_source.go:240] refreshing policies
	I1210 07:05:43.227160       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 07:05:43.232188       1 aggregator.go:171] initial CRD sync complete...
	I1210 07:05:43.232231       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 07:05:43.232240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 07:05:43.232248       1 cache.go:39] Caches are synced for autoregister controller
	I1210 07:05:43.234887       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 07:05:43.254671       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 07:05:43.906828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 07:05:45.331665       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 07:05:45.423144       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 07:05:45.484438       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 07:05:45.503231       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 07:05:46.656592       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 07:05:46.756355       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788] <==
	I1210 07:04:42.201669       1 serving.go:386] Generated self-signed cert in-memory
	I1210 07:04:42.978025       1 controllermanager.go:191] "Starting" version="v1.34.3"
	I1210 07:04:42.978080       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:04:42.980383       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 07:04:42.980651       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 07:04:42.981018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1210 07:04:42.981084       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 07:04:57.000638       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7] <==
	I1210 07:05:46.549650       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 07:05:46.550604       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:05:46.550862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 07:05:46.550911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 07:05:46.551013       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 07:05:46.552641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 07:05:46.552729       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 07:05:46.552783       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 07:05:46.553752       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 07:05:46.553845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:05:46.557837       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:05:46.557878       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:05:46.557887       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 07:05:46.558879       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 07:05:46.558980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:05:46.559231       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 07:05:46.559771       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 07:05:46.564076       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 07:05:46.564585       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 07:05:46.569514       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 07:05:46.571329       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 07:05:46.578277       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 07:05:46.582224       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 07:05:46.589469       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:05:46.590892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86] <==
	I1210 07:04:21.493379       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:04:21.735168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1210 07:04:21.740919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:22.854469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:25.141630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:28.761637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869] <==
	I1210 07:05:44.028069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:05:44.130650       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:05:44.130702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.172"]
	E1210 07:05:44.130784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:05:44.227305       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 07:05:44.227806       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 07:05:44.228047       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:05:44.256252       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:05:44.256818       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:05:44.257405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:05:44.262856       1 config.go:200] "Starting service config controller"
	I1210 07:05:44.273685       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:05:44.263974       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:05:44.273866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:05:44.263990       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:05:44.273921       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:05:44.271189       1 config.go:309] "Starting node config controller"
	I1210 07:05:44.273973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:05:44.273994       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:05:44.373882       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:05:44.373987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:05:44.374002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf] <==
	
	
	==> kube-scheduler [2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726] <==
	E1210 07:05:40.137680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.83.172:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:05:40.159830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.83.172:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:05:40.160178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.172:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:05:40.211034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.83.172:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:05:40.236710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.83.172:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:05:40.308884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:05:43.088621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:05:43.090812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:05:43.090925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:05:43.091002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:05:43.091085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:05:43.091209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:05:43.091292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:05:43.091383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:05:43.091473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:05:43.091618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:05:43.091718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:05:43.091800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:05:43.091880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:05:43.091937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:05:43.091989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:05:43.096001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 07:05:43.160984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:05:43.164460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1210 07:05:48.938893       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.059315    5213 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-179913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.111752    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="3720f22a5a575d6f6cca10f29e0903b1" pod="kube-system/kube-apiserver-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.117724    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="938741addc56bf029154c923341405b9" pod="kube-system/kube-controller-manager-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.118730    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nnm25\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="e290831e-e710-4fc6-9170-9661176ac06f" pod="kube-system/coredns-66bc5c9577-nnm25"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.119619    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-rvmnw\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="9d3714c1-16f4-4178-9896-49713556e897" pod="kube-system/kube-proxy-rvmnw"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.121193    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-qcwf4\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="3c3003a3-50a1-4248-a19f-458f0c14923b" pod="kube-system/coredns-66bc5c9577-qcwf4"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.130287    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="d5c54887e4f2545a88be76f1664442e1" pod="kube-system/etcd-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.156632    5213 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 10 07:05:43 pause-179913 kubelet[5213]:         pods "kube-controller-manager-pause-179913" is forbidden: User "system:node:pause-179913" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-179913' and this object
	Dec 10 07:05:43 pause-179913 kubelet[5213]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 10 07:05:43 pause-179913 kubelet[5213]:  > podUID="938741addc56bf029154c923341405b9" pod="kube-system/kube-controller-manager-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.295144    5213 kubelet_node_status.go:124] "Node was previously registered" node="pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.297734    5213 kubelet_node_status.go:78] "Successfully registered node" node="pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.297829    5213 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.301489    5213 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.545619    5213 scope.go:117] "RemoveContainer" containerID="f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.546057    5213 scope.go:117] "RemoveContainer" containerID="8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.548677    5213 scope.go:117] "RemoveContainer" containerID="7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08"
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.558818    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod938741addc56bf029154c923341405b9/crio-53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb: Error finding container 53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb: Status 404 returned error can't find the container with id 53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.559965    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3720f22a5a575d6f6cca10f29e0903b1/crio-4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346: Error finding container 4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346: Status 404 returned error can't find the container with id 4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.560606    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb47cf332bf25146df9a84a50003ecfff/crio-de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305: Error finding container de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305: Status 404 returned error can't find the container with id de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.601688    5213 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765350347601261866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.601731    5213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765350347601261866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:57 pause-179913 kubelet[5213]: E1210 07:05:57.604306    5213 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765350357603648681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:57 pause-179913 kubelet[5213]: E1210 07:05:57.605126    5213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765350357603648681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-179913 -n pause-179913
helpers_test.go:270: (dbg) Run:  kubectl --context pause-179913 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-179913 -n pause-179913
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-179913 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-179913 logs -n 25: (1.699262856s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-714139 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo containerd config dump                                                                                                                                                                                                │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ ssh     │ -p cilium-714139 sudo crio config                                                                                                                                                                                                           │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │                     │
	│ delete  │ -p cilium-714139                                                                                                                                                                                                                            │ cilium-714139             │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │ 10 Dec 25 07:02 UTC │
	│ start   │ -p guest-539425 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-539425              │ jenkins │ v1.37.0 │ 10 Dec 25 07:02 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p cert-expiration-198346 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-198346    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:04 UTC │
	│ delete  │ -p force-systemd-env-909953                                                                                                                                                                                                                 │ force-systemd-env-909953  │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p force-systemd-flag-302211 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:04 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-511706 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-511706    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │                     │
	│ start   │ -p pause-179913 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-179913              │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:05 UTC │
	│ delete  │ -p running-upgrade-511706                                                                                                                                                                                                                   │ running-upgrade-511706    │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:03 UTC │
	│ start   │ -p cert-options-977501 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:03 UTC │ 10 Dec 25 07:05 UTC │
	│ ssh     │ force-systemd-flag-302211 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │ 10 Dec 25 07:04 UTC │
	│ delete  │ -p force-systemd-flag-302211                                                                                                                                                                                                                │ force-systemd-flag-302211 │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │ 10 Dec 25 07:04 UTC │
	│ start   │ -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-508835    │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ ssh     │ cert-options-977501 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ ssh     │ -p cert-options-977501 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ delete  │ -p cert-options-977501                                                                                                                                                                                                                      │ cert-options-977501       │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:05 UTC │
	│ start   │ -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-548860         │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:05:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:05:16.206314  294110 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:05:16.206578  294110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:05:16.206587  294110 out.go:374] Setting ErrFile to fd 2...
	I1210 07:05:16.206591  294110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:05:16.206801  294110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 07:05:16.207315  294110 out.go:368] Setting JSON to false
	I1210 07:05:16.208514  294110 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31663,"bootTime":1765318653,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 07:05:16.208614  294110 start.go:143] virtualization: kvm guest
	I1210 07:05:16.210825  294110 out.go:179] * [no-preload-548860] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 07:05:16.212276  294110 notify.go:221] Checking for updates...
	I1210 07:05:16.212297  294110 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:05:16.214009  294110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:05:16.215560  294110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:16.217267  294110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.218913  294110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 07:05:16.220546  294110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:05:16.222559  294110 config.go:182] Loaded profile config "cert-expiration-198346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:16.222726  294110 config.go:182] Loaded profile config "guest-539425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 07:05:16.222962  294110 config.go:182] Loaded profile config "old-k8s-version-508835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 07:05:16.223227  294110 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:16.223392  294110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:05:16.270545  294110 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 07:05:16.272051  294110 start.go:309] selected driver: kvm2
	I1210 07:05:16.272070  294110 start.go:927] validating driver "kvm2" against <nil>
	I1210 07:05:16.272085  294110 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:05:16.272836  294110 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:05:16.273115  294110 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:16.273144  294110 cni.go:84] Creating CNI manager for ""
	I1210 07:05:16.273205  294110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:16.273220  294110 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:05:16.273298  294110 start.go:353] cluster config:
	{Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:16.273429  294110 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.275396  294110 out.go:179] * Starting "no-preload-548860" primary control-plane node in "no-preload-548860" cluster
	I1210 07:05:16.276945  294110 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:05:16.277096  294110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json ...
	I1210 07:05:16.277139  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json: {Name:mka274ecd1a9c088a679326196f01e5af9e1ec92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:16.277331  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.277365  294110 start.go:360] acquireMachinesLock for no-preload-548860: {Name:mk2161deb194f56aae2b0559c12fd0eb56fd317d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 07:05:16.277431  294110 start.go:364] duration metric: took 30.345µs to acquireMachinesLock for "no-preload-548860"
	I1210 07:05:16.277461  294110 start.go:93] Provisioning new machine with config: &{Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:16.277541  294110 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 07:05:14.066553  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:14.066601  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:14.066627  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:16.074470  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:16.074514  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:16.074539  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:18.083725  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:18.083764  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:18.083793  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
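The repeated 500s above are minikube polling the apiserver's /healthz endpoint roughly every two seconds until etcd and the bootstrap post-start hooks report healthy. A minimal sketch of that polling pattern as a standalone Go program (a hypothetical helper, not minikube's actual api_server.go; TLS verification is skipped only because the bootstrapping apiserver serves a self-signed certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
// The last response body is returned so callers can log which checks failed.
func waitForHealthz(url string, timeout time.Duration) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The bootstrapping apiserver uses a self-signed cert; skip verification here only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	lastBody := ""
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			lastBody = string(body)
			if resp.StatusCode == http.StatusOK {
				return lastBody, nil
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts in the log
	}
	return lastBody, fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	body, err := waitForHealthz("https://192.168.83.172:8443/healthz", 4*time.Minute)
	if err != nil {
		fmt.Println(err)
		fmt.Println(body) // prints the [+]/[-] check breakdown, as in the log above
	}
}

On failure the saved body still carries the [+]/[-] breakdown, which is what makes the "reason withheld" lines above useful: they show etcd and a handful of dependent post-start hooks as the checks holding the apiserver at 500.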
	I1210 07:05:16.808679  293764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256657975s)
	I1210 07:05:16.808705  293764 crio.go:469] duration metric: took 2.256782266s to extract the tarball
	I1210 07:05:16.808714  293764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 07:05:16.863654  293764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:05:16.915842  293764 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:05:16.915865  293764 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:05:16.915888  293764 kubeadm.go:935] updating node { 192.168.50.231 8443 v1.28.0 crio true true} ...
	I1210 07:05:16.915985  293764 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-508835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-508835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:05:16.916062  293764 ssh_runner.go:195] Run: crio config
	I1210 07:05:16.971131  293764 cni.go:84] Creating CNI manager for ""
	I1210 07:05:16.971159  293764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:16.971185  293764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:05:16.971227  293764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.231 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508835 NodeName:old-k8s-version-508835 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:05:16.971431  293764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-508835"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
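The kubeadm.yaml above is rendered from the profile's cluster config and then copied to /var/tmp/minikube/kubeadm.yaml.new before kubeadm init runs. A rough sketch of producing such a manifest with Go's text/template; the struct and template below are illustrative stand-ins, not minikube's bootstrapper types:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values that vary per profile
// (an illustrative struct, not minikube's real config type).
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	// Values taken from the log above for old-k8s-version-508835.
	p := clusterParams{
		AdvertiseAddress: "192.168.50.231",
		BindPort:         8443,
		NodeName:         "old-k8s-version-508835",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.28.0",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

The rendered text is what later lands on the node via the scp of kubeadm.yaml.new (2166 bytes) shown a few lines below.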
	I1210 07:05:16.971514  293764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1210 07:05:16.988370  293764 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:05:16.988454  293764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:05:17.003395  293764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:05:17.027477  293764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:05:17.055549  293764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1210 07:05:17.082668  293764 ssh_runner.go:195] Run: grep 192.168.50.231	control-plane.minikube.internal$ /etc/hosts
	I1210 07:05:17.088894  293764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:05:17.108231  293764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:17.292912  293764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:17.336862  293764 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835 for IP: 192.168.50.231
	I1210 07:05:17.336898  293764 certs.go:195] generating shared ca certs ...
	I1210 07:05:17.336919  293764 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.337120  293764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 07:05:17.337182  293764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 07:05:17.337197  293764 certs.go:257] generating profile certs ...
	I1210 07:05:17.337278  293764 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key
	I1210 07:05:17.337312  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt with IP's: []
	I1210 07:05:17.397490  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt ...
	I1210 07:05:17.397522  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: {Name:mkfdd1331d0f05540e260f5cb03882408a7eed76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.397727  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key ...
	I1210 07:05:17.397745  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.key: {Name:mk042496ce578626a444eeea6e0812d38d4d73dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.397868  293764 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092
	I1210 07:05:17.397911  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.231]
	I1210 07:05:17.578994  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 ...
	I1210 07:05:17.579025  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092: {Name:mk3307c8ded08808c71b9d2a1a3f81e34a37cc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.579206  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092 ...
	I1210 07:05:17.579224  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092: {Name:mk25ebc3af9d5f8d9057125864abfb2e61fd787b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.579333  293764 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt.7cf19092 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt
	I1210 07:05:17.579422  293764 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key.7cf19092 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key
	I1210 07:05:17.579505  293764 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key
	I1210 07:05:17.579530  293764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt with IP's: []
	I1210 07:05:17.628159  293764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt ...
	I1210 07:05:17.628190  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt: {Name:mkae654fb4308c8a05a021a33322a845a1288052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.628379  293764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key ...
	I1210 07:05:17.628411  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key: {Name:mk602569a094af04941e3424c21f68e5bf6eb2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:17.628633  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem (1338 bytes)
	W1210 07:05:17.628688  293764 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366_empty.pem, impossibly tiny 0 bytes
	I1210 07:05:17.628705  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:05:17.628743  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:05:17.628783  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:05:17.628819  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 07:05:17.628892  293764 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:17.629565  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:05:17.673120  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:05:17.708293  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:05:17.747930  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:05:17.789028  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:05:17.833224  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:05:17.869711  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:05:17.904498  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:05:17.943073  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /usr/share/ca-certificates/2473662.pem (1708 bytes)
	I1210 07:05:17.992682  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:05:18.033787  293764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem --> /usr/share/ca-certificates/247366.pem (1338 bytes)
	I1210 07:05:18.088542  293764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:05:18.134417  293764 ssh_runner.go:195] Run: openssl version
	I1210 07:05:18.145761  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.166082  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2473662.pem /etc/ssl/certs/2473662.pem
	I1210 07:05:18.185359  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.193258  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:59 /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.193334  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2473662.pem
	I1210 07:05:18.205065  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:18.222449  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2473662.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:18.245985  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.265317  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:05:18.283092  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.291584  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.291670  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:18.301559  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:05:18.321136  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:05:18.338572  293764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.355927  293764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/247366.pem /etc/ssl/certs/247366.pem
	I1210 07:05:18.373116  293764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.381640  293764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:59 /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.381729  293764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247366.pem
	I1210 07:05:18.398000  293764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:05:18.414669  293764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/247366.pem /etc/ssl/certs/51391683.0
	I1210 07:05:18.433027  293764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:05:18.442915  293764 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:05:18.442986  293764 kubeadm.go:401] StartCluster: {Name:old-k8s-version-508835 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-508835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.231 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:18.443095  293764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:05:18.443164  293764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:05:18.497505  293764 cri.go:89] found id: ""
	I1210 07:05:18.497596  293764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:05:18.512434  293764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:05:18.527144  293764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:05:18.542932  293764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:05:18.542952  293764 kubeadm.go:158] found existing configuration files:
	
	I1210 07:05:18.543005  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:05:18.555920  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:05:18.555982  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:05:18.571181  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:05:18.586661  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:05:18.586762  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:05:18.603660  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:05:18.618857  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:05:18.618982  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:05:18.634729  293764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:05:18.648027  293764 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:05:18.648104  293764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:05:18.661926  293764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 07:05:18.736922  293764 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1210 07:05:18.737021  293764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:05:18.962809  293764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:05:18.963026  293764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:05:18.963163  293764 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 07:05:19.181013  293764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:05:19.189016  293764 out.go:252]   - Generating certificates and keys ...
	I1210 07:05:19.189154  293764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:05:19.189251  293764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:05:19.386451  293764 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:05:19.621125  293764 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:05:19.791435  293764 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:05:16.279403  294110 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1210 07:05:16.279590  294110 start.go:159] libmachine.API.Create for "no-preload-548860" (driver="kvm2")
	I1210 07:05:16.279625  294110 client.go:173] LocalClient.Create starting
	I1210 07:05:16.279692  294110 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem
	I1210 07:05:16.279743  294110 main.go:143] libmachine: Decoding PEM data...
	I1210 07:05:16.279780  294110 main.go:143] libmachine: Parsing certificate...
	I1210 07:05:16.279846  294110 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem
	I1210 07:05:16.279916  294110 main.go:143] libmachine: Decoding PEM data...
	I1210 07:05:16.279937  294110 main.go:143] libmachine: Parsing certificate...
	I1210 07:05:16.280321  294110 main.go:143] libmachine: creating domain...
	I1210 07:05:16.280334  294110 main.go:143] libmachine: creating network...
	I1210 07:05:16.282108  294110 main.go:143] libmachine: found existing default network
	I1210 07:05:16.282425  294110 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.283573  294110 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:1f:09} reservation:<nil>}
	I1210 07:05:16.284920  294110 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:b2:1d} reservation:<nil>}
	I1210 07:05:16.285961  294110 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:7b:73} reservation:<nil>}
	I1210 07:05:16.287386  294110 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cad7d0}
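The three "skipping subnet ... that is taken" lines show the driver walking candidate private /24 ranges and picking the first one whose gateway address is not already bound to a host interface. A simplified sketch of that selection, assuming a fixed candidate list and a plain interface-address check rather than minikube's network package:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 whose gateway address
// (x.x.x.1) is not already assigned to a local interface.
func firstFreeSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for _, cidr := range candidates {
		ip, _, err := net.ParseCIDR(cidr)
		if err != nil {
			return "", err
		}
		gw := ip.To4()
		gw[3] = 1 // gateway convention: first host address in the /24
		if !taken[gw.String()] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	// Same ordering the log walks through before settling on 192.168.72.0/24.
	subnet, err := firstFreeSubnet([]string{
		"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
	})
	fmt.Println(subnet, err)
}

Once a free range is found, its .1 address becomes the libvirt network gateway and .2-.253 becomes the DHCP range, as the mk-no-preload-548860 XML below shows.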
	I1210 07:05:16.287520  294110 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-548860</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.294751  294110 main.go:143] libmachine: creating private network mk-no-preload-548860 192.168.72.0/24...
	I1210 07:05:16.398048  294110 main.go:143] libmachine: private network mk-no-preload-548860 192.168.72.0/24 created
	I1210 07:05:16.398398  294110 main.go:143] libmachine: <network>
	  <name>mk-no-preload-548860</name>
	  <uuid>cf93ddd6-ad43-4377-b4c8-115a8ae19c44</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:21:ec:98'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1210 07:05:16.398443  294110 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 ...
	I1210 07:05:16.398497  294110 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 07:05:16.398511  294110 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.398590  294110 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22094-243461/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1210 07:05:16.456159  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.627626  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:16.678007  294110 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa...
	I1210 07:05:16.806580  294110 cache.go:107] acquiring lock: {Name:mk929cf02fc539c0a3ba415ba856603e7a2db9a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806629  294110 cache.go:107] acquiring lock: {Name:mke47ba0cdabea58510a295512b0c545824c6ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806636  294110 cache.go:107] acquiring lock: {Name:mk2eb8700e86825bc25abe1ba8e089f6daac20f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806586  294110 cache.go:107] acquiring lock: {Name:mk2d5c3355eb914434f77fe8a549e7e27d61d8ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806638  294110 cache.go:107] acquiring lock: {Name:mk5be695d928909e19606bdb32e31778a0102505 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806679  294110 cache.go:107] acquiring lock: {Name:mkf0a6fde87f6be7fe6d187eadd0116c7e80851c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806775  294110 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:05:16.806591  294110 cache.go:107] acquiring lock: {Name:mkb531d6d5f9ca51fa23c93b7dbb7d49c9f9871f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806793  294110 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 234.679µs
	I1210 07:05:16.806585  294110 cache.go:107] acquiring lock: {Name:mk8a2b7c7103ad9b74ce0f1af971a5d8da1c8f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:05:16.806806  294110 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:05:16.806842  294110 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:16.806871  294110 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:16.806949  294110 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:16.807007  294110 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:16.807027  294110 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:16.807161  294110 cache.go:115] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:05:16.807171  294110 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 614.196µs
	I1210 07:05:16.807180  294110 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:05:16.807188  294110 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:16.808352  294110 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:16.808351  294110 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:16.808350  294110 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:16.808423  294110 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:16.808351  294110 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:16.809199  294110 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:16.847569  294110 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk...
	I1210 07:05:16.847626  294110 main.go:143] libmachine: Writing magic tar header
	I1210 07:05:16.847656  294110 main.go:143] libmachine: Writing SSH key tar header
	I1210 07:05:16.847751  294110 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 ...
	I1210 07:05:16.847836  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860
	I1210 07:05:16.847870  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860 (perms=drwx------)
	I1210 07:05:16.847907  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube/machines
	I1210 07:05:16.847924  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube/machines (perms=drwxr-xr-x)
	I1210 07:05:16.847941  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:05:16.847954  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461/.minikube (perms=drwxr-xr-x)
	I1210 07:05:16.847966  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22094-243461
	I1210 07:05:16.847979  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22094-243461 (perms=drwxrwxr-x)
	I1210 07:05:16.847992  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1210 07:05:16.848012  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 07:05:16.848026  294110 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1210 07:05:16.848037  294110 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 07:05:16.848053  294110 main.go:143] libmachine: checking permissions on dir: /home
	I1210 07:05:16.848067  294110 main.go:143] libmachine: skipping /home - not owner
	I1210 07:05:16.848076  294110 main.go:143] libmachine: defining domain...
	I1210 07:05:16.849697  294110 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-548860</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-548860'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1210 07:05:16.855284  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:30:24:f8 in network default
	I1210 07:05:16.855990  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:16.856011  294110 main.go:143] libmachine: starting domain...
	I1210 07:05:16.856015  294110 main.go:143] libmachine: ensuring networks are active...
	I1210 07:05:16.856813  294110 main.go:143] libmachine: Ensuring network default is active
	I1210 07:05:16.857186  294110 main.go:143] libmachine: Ensuring network mk-no-preload-548860 is active
	I1210 07:05:16.857756  294110 main.go:143] libmachine: getting domain XML...
	I1210 07:05:16.858729  294110 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-548860</name>
	  <uuid>8982359d-2b52-47d5-8872-36df279de441</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/no-preload-548860.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6b:d2:9f'/>
	      <source network='mk-no-preload-548860'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:30:24:f8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1210 07:05:16.965549  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:05:16.970732  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:16.995224  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:17.023669  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 07:05:17.025984  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1210 07:05:17.045842  294110 cache.go:162] opening:  /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:17.585762  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:05:17.585791  294110 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 779.154973ms
	I1210 07:05:17.585802  294110 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:05:18.273163  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:05:18.273195  294110 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.466560078s
	I1210 07:05:18.273209  294110 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:05:18.349462  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:05:18.349510  294110 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.542880705s
	I1210 07:05:18.349529  294110 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:05:18.425650  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:05:18.425683  294110 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.619112587s
	I1210 07:05:18.425699  294110 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:05:18.432235  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:05:18.432265  294110 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.625715165s
	I1210 07:05:18.432277  294110 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:05:18.461488  294110 cache.go:157] /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:05:18.461519  294110 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.654875909s
	I1210 07:05:18.461537  294110 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:05:18.461555  294110 cache.go:87] Successfully saved all images to host disk.
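For context on the cache.go lines above: each image reference is mapped to a tar file under the local cache directory, and the save is skipped when that tar already exists. The following is a minimal sketch of that pattern only, not minikube's actual cache.go; the cache path, the ref-to-filename mapping, and saveImage are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cacheImage maps an image ref to a tar path under cacheDir and skips the save
// when the tar already exists, mirroring the "exists ... took ... succeeded" lines.
func cacheImage(cacheDir, imageRef string) error {
	start := time.Now()
	// e.g. registry.k8s.io/etcd:3.6.6-0 -> <cacheDir>/registry.k8s.io/etcd_3.6.6-0
	tarPath := filepath.Join(cacheDir, strings.Replace(imageRef, ":", "_", 1))
	if _, err := os.Stat(tarPath); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", imageRef, tarPath, time.Since(start))
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(tarPath), 0o755); err != nil {
		return err
	}
	return saveImage(imageRef, tarPath)
}

// saveImage is a placeholder: a real implementation would pull the image and
// stream it to tarPath as a tarball.
func saveImage(imageRef, tarPath string) error {
	return os.WriteFile(tarPath, []byte{}, 0o644)
}

func main() {
	if err := cacheImage("/tmp/image-cache/amd64", "registry.k8s.io/etcd:3.6.6-0"); err != nil {
		fmt.Println("cache error:", err)
	}
}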
	I1210 07:05:18.895683  294110 main.go:143] libmachine: waiting for domain to start...
	I1210 07:05:18.897270  294110 main.go:143] libmachine: domain is now running
	I1210 07:05:18.897288  294110 main.go:143] libmachine: waiting for IP...
	I1210 07:05:18.898169  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:18.898970  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:18.898991  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:18.899531  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:18.899588  294110 retry.go:31] will retry after 261.109719ms: waiting for domain to come up
	I1210 07:05:19.162347  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.163226  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.163247  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.163714  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.163769  294110 retry.go:31] will retry after 255.06727ms: waiting for domain to come up
	I1210 07:05:19.420807  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.421772  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.421807  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.422359  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.422412  294110 retry.go:31] will retry after 384.138204ms: waiting for domain to come up
	I1210 07:05:19.808271  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:19.808950  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:19.808970  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:19.809384  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:19.809424  294110 retry.go:31] will retry after 559.345956ms: waiting for domain to come up
	I1210 07:05:20.370429  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:20.371329  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:20.371346  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:20.371852  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:20.371920  294110 retry.go:31] will retry after 697.632642ms: waiting for domain to come up
	I1210 07:05:21.071160  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:21.071941  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:21.071961  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:21.072379  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:21.072422  294110 retry.go:31] will retry after 652.843698ms: waiting for domain to come up
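The retry.go:31 lines above come from a poll-with-backoff loop: the driver repeatedly asks libvirt for the domain's address (first from the DHCP lease table, then via ARP) and sleeps a growing, jittered interval between attempts. A rough sketch of that loop, assuming a hypothetical lookupIP callback in place of the real lease/ARP queries:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the timeout expires,
// sleeping a jittered, growing delay between attempts (compare the 261ms, 255ms,
// 384ms, 559ms, ... cadence in the log).
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no DHCP lease or ARP entry yet")
		}
		return "192.168.72.64", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}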
	I1210 07:05:20.084416  293764 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:05:20.312465  293764 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:05:20.312864  293764 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-508835] and IPs [192.168.50.231 127.0.0.1 ::1]
	I1210 07:05:20.622637  293764 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:05:20.622867  293764 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-508835] and IPs [192.168.50.231 127.0.0.1 ::1]
	I1210 07:05:20.814447  293764 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:05:21.253410  293764 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:05:21.548631  293764 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:05:21.549285  293764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:05:21.685611  293764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:05:21.828322  293764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:05:21.962352  293764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:05:22.281291  293764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:05:22.281762  293764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:05:22.284704  293764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:05:20.095051  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:20.095096  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
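The api_server.go lines above (and the many repeats that follow) are a simple readiness loop: GET /healthz on the apiserver, log the body when the status is not 200, and try again a couple of seconds later until etcd and the poststart hooks report healthy. A self-contained sketch of that probe, assuming the endpoint from the log and a skip-verify TLS transport (minikube's real client also presents client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.83.172:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. connection refused / EOF while the apiserver container restarts
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
}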
	I1210 07:05:20.095123  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:22.102130  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:22.102168  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:22.102195  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:22.287443  293764 out.go:252]   - Booting up control plane ...
	I1210 07:05:22.287579  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:05:22.287681  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:05:22.287776  293764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:05:22.306297  293764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:05:22.307275  293764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:05:22.307367  293764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:05:22.500236  293764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 07:05:21.727180  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:21.727957  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:21.727984  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:21.728483  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:21.728533  294110 retry.go:31] will retry after 1.171673249s: waiting for domain to come up
	I1210 07:05:22.902321  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:22.903157  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:22.903183  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:22.903638  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:22.903682  294110 retry.go:31] will retry after 1.024090759s: waiting for domain to come up
	I1210 07:05:23.929407  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:23.930288  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:23.930321  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:23.930735  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:23.930781  294110 retry.go:31] will retry after 1.64218528s: waiting for domain to come up
	I1210 07:05:25.574577  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:25.575352  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:25.575370  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:25.575900  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:25.575947  294110 retry.go:31] will retry after 2.176104411s: waiting for domain to come up
	I1210 07:05:24.109943  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:24.109984  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:24.110021  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:26.117316  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:26.117358  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:26.117384  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:28.124715  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:28.124754  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:28.124776  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:29.000968  293764 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.504795 seconds
	I1210 07:05:29.001139  293764 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:05:29.020007  293764 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:05:29.557546  293764 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:05:29.557852  293764 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-508835 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:05:30.077738  293764 kubeadm.go:319] [bootstrap-token] Using token: ul6r04.zb9s25jrgemvvhyb
	I1210 07:05:30.079471  293764 out.go:252]   - Configuring RBAC rules ...
	I1210 07:05:30.079678  293764 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:05:30.089980  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:05:30.106937  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:05:30.117459  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:05:30.124786  293764 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:05:30.128962  293764 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:05:30.155466  293764 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:05:30.510419  293764 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:05:30.577855  293764 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:05:30.581050  293764 kubeadm.go:319] 
	I1210 07:05:30.581150  293764 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:05:30.581157  293764 kubeadm.go:319] 
	I1210 07:05:30.581266  293764 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:05:30.581317  293764 kubeadm.go:319] 
	I1210 07:05:30.581376  293764 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:05:30.581475  293764 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:05:30.581571  293764 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:05:30.581582  293764 kubeadm.go:319] 
	I1210 07:05:30.581692  293764 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:05:30.581718  293764 kubeadm.go:319] 
	I1210 07:05:30.581800  293764 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:05:30.581832  293764 kubeadm.go:319] 
	I1210 07:05:30.581942  293764 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:05:30.582055  293764 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:05:30.582163  293764 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:05:30.582173  293764 kubeadm.go:319] 
	I1210 07:05:30.582320  293764 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:05:30.582435  293764 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:05:30.582446  293764 kubeadm.go:319] 
	I1210 07:05:30.582555  293764 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ul6r04.zb9s25jrgemvvhyb \
	I1210 07:05:30.582686  293764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 \
	I1210 07:05:30.582716  293764 kubeadm.go:319] 	--control-plane 
	I1210 07:05:30.582721  293764 kubeadm.go:319] 
	I1210 07:05:30.582826  293764 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:05:30.582834  293764 kubeadm.go:319] 
	I1210 07:05:30.583018  293764 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ul6r04.zb9s25jrgemvvhyb \
	I1210 07:05:30.583153  293764 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:3fdb6ac7c26d35942b0591aa9d148ccb3b5b969098be653ce61486bf954c5d36 
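The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A small sketch for recomputing it from the CA certificate, useful when verifying a join command out of band; the ca.crt path is the standard kubeadm location and is assumed here:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm prints as sha256:...
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}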
	I1210 07:05:30.586305  293764 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:05:30.586343  293764 cni.go:84] Creating CNI manager for ""
	I1210 07:05:30.586353  293764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:30.588272  293764 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:05:27.754268  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:27.755161  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:27.755206  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:27.755672  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:27.755721  294110 retry.go:31] will retry after 1.809505749s: waiting for domain to come up
	I1210 07:05:29.567691  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:29.568456  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:29.568472  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:29.568951  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:29.568991  294110 retry.go:31] will retry after 2.58197786s: waiting for domain to come up
	I1210 07:05:30.132497  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:30.132543  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:30.132570  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:32.140647  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:32.140680  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:32.140704  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:30.589945  293764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:05:30.622758  293764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
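The log only records that a 496-byte /etc/cni/net.d/1-k8s.conflist was written, not its contents. For orientation, the sketch below writes a typical bridge + portmap conflist of the kind used with the bridge CNI and CRI-O; the exact subnet, fields, and byte count are assumptions, not the file from this run.

package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing to /etc/cni/net.d normally requires root on the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}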
	I1210 07:05:30.721185  293764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:05:30.721352  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:30.721363  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-508835 minikube.k8s.io/updated_at=2025_12_10T07_05_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=old-k8s-version-508835 minikube.k8s.io/primary=true
	I1210 07:05:30.843717  293764 ops.go:34] apiserver oom_adj: -16
	I1210 07:05:30.996234  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:31.497128  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:31.996377  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:32.496973  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:32.996691  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:33.497137  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:33.997045  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:34.497323  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
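The repeated "kubectl get sa default" runs above are a readiness poll: after kubeadm init, the bootstrap keeps checking (roughly every 500ms here) until the "default" ServiceAccount exists, which signals that the controller-manager's token/SA machinery is working. A minimal sketch of that loop using the same binary path and kubeconfig as the log; running it requires a matching cluster on the node:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for attempt := 1; attempt <= 20; attempt++ {
		// "sudo" is dropped here; run the sketch as root on the node instead.
		if err := exec.Command(args[0], args[1:]...).Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}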
	I1210 07:05:32.152841  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:32.153546  294110 main.go:143] libmachine: no network interface addresses found for domain no-preload-548860 (source=lease)
	I1210 07:05:32.153563  294110 main.go:143] libmachine: trying to list again with source=arp
	I1210 07:05:32.154058  294110 main.go:143] libmachine: unable to find current IP address of domain no-preload-548860 in network mk-no-preload-548860 (interfaces detected: [])
	I1210 07:05:32.154099  294110 retry.go:31] will retry after 3.614054475s: waiting for domain to come up
	I1210 07:05:35.769329  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:35.770129  294110 main.go:143] libmachine: domain no-preload-548860 has current primary IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:35.770148  294110 main.go:143] libmachine: found domain IP: 192.168.72.64
	I1210 07:05:35.770157  294110 main.go:143] libmachine: reserving static IP address...
	I1210 07:05:35.770598  294110 main.go:143] libmachine: unable to find host DHCP lease matching {name: "no-preload-548860", mac: "52:54:00:6b:d2:9f", ip: "192.168.72.64"} in network mk-no-preload-548860
	I1210 07:05:36.064219  294110 main.go:143] libmachine: reserved static IP address 192.168.72.64 for domain no-preload-548860
	I1210 07:05:36.064255  294110 main.go:143] libmachine: waiting for SSH...
	I1210 07:05:36.064264  294110 main.go:143] libmachine: Getting to WaitForSSH function...
	I1210 07:05:36.068846  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.069443  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.069481  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.069761  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.070242  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.070262  294110 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1210 07:05:36.194127  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:05:36.194505  294110 main.go:143] libmachine: domain creation complete
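The "waiting for SSH" step above boils down to dialing the VM on port 22 and running `exit 0`; a zero exit status means the guest is far enough along to provision. A sketch using golang.org/x/crypto/ssh; the user name and password auth are illustrative assumptions (minikube actually authenticates with its generated key pair):

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("changeme")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.64:22", cfg)
	if err != nil {
		log.Fatalf("SSH not ready yet: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	if err := session.Run("exit 0"); err != nil {
		log.Fatalf("SSH cmd err: %v", err)
	}
	fmt.Println("SSH cmd err, output: <nil>")
}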
	I1210 07:05:36.196345  294110 machine.go:94] provisionDockerMachine start ...
	I1210 07:05:36.199346  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.199849  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.199901  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.200133  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.200358  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.200373  294110 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:05:34.147765  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:34.147863  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:34.147937  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:36.154382  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:36.154428  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:36.154448  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.822940  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": EOF
	I1210 07:05:37.823010  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.830354  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": read tcp 192.168.83.1:53560->192.168.83.172:8443: read: connection reset by peer
	I1210 07:05:37.998713  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:37.999558  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
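For readers following the log: the repeated "Checking apiserver healthz" entries above are minikube polling the apiserver's /healthz endpoint and retrying on 500 responses, EOFs and connection resets until it reports healthy. Below is a minimal Go sketch of that style of polling loop, not minikube's actual api_server.go implementation; the URL, timeout and retry interval are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
// Transport errors (connection refused, EOF, resets) are treated as "not ready
// yet" and retried, mirroring the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The bootstrapping apiserver presents a self-signed certificate,
			// so the probe skips verification here (illustrative only).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.172:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}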
	I1210 07:05:36.320325  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 07:05:36.320358  294110 buildroot.go:166] provisioning hostname "no-preload-548860"
	I1210 07:05:36.323439  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.323998  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.324042  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.324294  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.324621  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.324638  294110 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-548860 && echo "no-preload-548860" | sudo tee /etc/hostname
	I1210 07:05:36.467740  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-548860
	
	I1210 07:05:36.471153  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.471580  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.471611  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.471836  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.472095  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.472118  294110 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-548860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-548860/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-548860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:05:36.611324  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:05:36.611366  294110 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22094-243461/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-243461/.minikube}
	I1210 07:05:36.611401  294110 buildroot.go:174] setting up certificates
	I1210 07:05:36.611417  294110 provision.go:84] configureAuth start
	I1210 07:05:36.614604  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.615203  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.615233  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.618501  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.618944  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.618970  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.619115  294110 provision.go:143] copyHostCerts
	I1210 07:05:36.619197  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem, removing ...
	I1210 07:05:36.619211  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem
	I1210 07:05:36.619300  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/ca.pem (1082 bytes)
	I1210 07:05:36.619438  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem, removing ...
	I1210 07:05:36.619454  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem
	I1210 07:05:36.619498  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/cert.pem (1123 bytes)
	I1210 07:05:36.619560  294110 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem, removing ...
	I1210 07:05:36.619567  294110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem
	I1210 07:05:36.619592  294110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-243461/.minikube/key.pem (1675 bytes)
	I1210 07:05:36.619648  294110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem org=jenkins.no-preload-548860 san=[127.0.0.1 192.168.72.64 localhost minikube no-preload-548860]
	I1210 07:05:36.760391  294110 provision.go:177] copyRemoteCerts
	I1210 07:05:36.760469  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:05:36.763680  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.764184  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.764213  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.764397  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:36.856922  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:05:36.893044  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:05:36.928738  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:05:36.964204  294110 provision.go:87] duration metric: took 352.757093ms to configureAuth
	I1210 07:05:36.964237  294110 buildroot.go:189] setting minikube options for container-runtime
	I1210 07:05:36.964430  294110 config.go:182] Loaded profile config "no-preload-548860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 07:05:36.967548  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.968165  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:36.968208  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:36.968503  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:36.968758  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:36.968785  294110 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:05:37.251523  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:05:37.251565  294110 machine.go:97] duration metric: took 1.055200039s to provisionDockerMachine
	I1210 07:05:37.251578  294110 client.go:176] duration metric: took 20.971943282s to LocalClient.Create
	I1210 07:05:37.251598  294110 start.go:167] duration metric: took 20.972007767s to libmachine.API.Create "no-preload-548860"
	I1210 07:05:37.251608  294110 start.go:293] postStartSetup for "no-preload-548860" (driver="kvm2")
	I1210 07:05:37.251624  294110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:05:37.251696  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:05:37.255235  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.255793  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.255824  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.256086  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.351083  294110 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:05:37.357031  294110 info.go:137] Remote host: Buildroot 2025.02
	I1210 07:05:37.357064  294110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/addons for local assets ...
	I1210 07:05:37.357161  294110 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-243461/.minikube/files for local assets ...
	I1210 07:05:37.357284  294110 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem -> 2473662.pem in /etc/ssl/certs
	I1210 07:05:37.357413  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:05:37.373661  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:37.405354  294110 start.go:296] duration metric: took 153.724982ms for postStartSetup
	I1210 07:05:37.409794  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.410400  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.410429  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.410756  294110 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/config.json ...
	I1210 07:05:37.411066  294110 start.go:128] duration metric: took 21.133509001s to createHost
	I1210 07:05:37.417978  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.418703  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.418741  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.419052  294110 main.go:143] libmachine: Using SSH client type: native
	I1210 07:05:37.419288  294110 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.64 22 <nil> <nil>}
	I1210 07:05:37.419305  294110 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1210 07:05:37.541300  294110 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765350337.501184110
	
	I1210 07:05:37.541335  294110 fix.go:216] guest clock: 1765350337.501184110
	I1210 07:05:37.541347  294110 fix.go:229] Guest: 2025-12-10 07:05:37.50118411 +0000 UTC Remote: 2025-12-10 07:05:37.411081074 +0000 UTC m=+21.271682045 (delta=90.103036ms)
	I1210 07:05:37.541372  294110 fix.go:200] guest clock delta is within tolerance: 90.103036ms
	I1210 07:05:37.541380  294110 start.go:83] releasing machines lock for "no-preload-548860", held for 21.263935615s
	I1210 07:05:37.544476  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.544857  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.544898  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.545575  294110 ssh_runner.go:195] Run: cat /version.json
	I1210 07:05:37.545690  294110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:05:37.549192  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549200  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549795  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.549824  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.549982  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:37.550024  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:37.550030  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.550253  294110 sshutil.go:53] new ssh client: &{IP:192.168.72.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/no-preload-548860/id_rsa Username:docker}
	I1210 07:05:37.662325  294110 ssh_runner.go:195] Run: systemctl --version
	I1210 07:05:37.669309  294110 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:05:37.841819  294110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:05:37.850301  294110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:05:37.850398  294110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:05:37.874908  294110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:05:37.874942  294110 start.go:496] detecting cgroup driver to use...
	I1210 07:05:37.875016  294110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:05:37.896475  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:05:37.914318  294110 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:05:37.914399  294110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:05:37.934942  294110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:05:37.953295  294110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:05:38.114852  294110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:05:38.332612  294110 docker.go:234] disabling docker service ...
	I1210 07:05:38.332689  294110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:05:38.353540  294110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:05:38.373202  294110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:05:38.536677  294110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:05:38.702119  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:05:38.720186  294110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:05:38.749869  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:38.909781  294110 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:05:38.909851  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.924629  294110 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:05:38.924705  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.938319  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.952483  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.966337  294110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:05:38.980908  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:38.995766  294110 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:39.021247  294110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:05:39.036099  294110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:05:39.048446  294110 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 07:05:39.048519  294110 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 07:05:39.078291  294110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:05:39.095974  294110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:39.258006  294110 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:05:39.394593  294110 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:05:39.394667  294110 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:05:39.401389  294110 start.go:564] Will wait 60s for crictl version
	I1210 07:05:39.401458  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:39.406185  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 07:05:39.450035  294110 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 07:05:39.450133  294110 ssh_runner.go:195] Run: crio --version
	I1210 07:05:39.484666  294110 ssh_runner.go:195] Run: crio --version
	I1210 07:05:39.522170  294110 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
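The block above shows minikube shelling out to sed so that cri-o uses the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager, then restarting crio. A rough Go sketch of the same two edits is below; the file path and editing locally instead of over SSH are assumptions for illustration, since the real code drives these edits through ssh_runner.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the two settings the log adjusts via sed: the pause
// image and the cgroup manager. Editing the file in place locally (rather than
// over SSH, as ssh_runner does) is a simplification for this sketch.
func configureCrio(confPath, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(confPath, []byte(conf), 0o644)
}

func main() {
	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	if err != nil {
		fmt.Println(err)
	}
	// The log then runs `sudo systemctl restart crio` to pick up the changes.
}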
	I1210 07:05:34.997014  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:35.497136  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:35.997184  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:36.496582  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:36.997154  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:37.497025  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:37.996845  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:38.497334  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:38.996365  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:39.497183  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:39.527366  294110 main.go:143] libmachine: domain no-preload-548860 has defined MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:39.527839  294110 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:d2:9f", ip: ""} in network mk-no-preload-548860: {Iface:virbr4 ExpiryTime:2025-12-10 08:05:33 +0000 UTC Type:0 Mac:52:54:00:6b:d2:9f Iaid: IPaddr:192.168.72.64 Prefix:24 Hostname:no-preload-548860 Clientid:01:52:54:00:6b:d2:9f}
	I1210 07:05:39.527897  294110 main.go:143] libmachine: domain no-preload-548860 has defined IP address 192.168.72.64 and MAC address 52:54:00:6b:d2:9f in network mk-no-preload-548860
	I1210 07:05:39.528144  294110 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 07:05:39.534832  294110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:05:39.553608  294110 kubeadm.go:884] updating cluster {Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:05:39.553816  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:39.706508  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:39.858358  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:40.007040  294110 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:05:40.007109  294110 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:05:40.047504  294110 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 07:05:40.047544  294110 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:05:40.047599  294110 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:40.047942  294110 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.047970  294110 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.048145  294110 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.048187  294110 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.048309  294110 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.047941  294110 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.048314  294110 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.049994  294110 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.050022  294110 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:40.050023  294110 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.050060  294110 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.050112  294110 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.050128  294110 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.049994  294110 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.051472  294110 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.173671  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.174561  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.174933  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.182455  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.191008  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.200370  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:05:40.245011  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.329945  294110 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1210 07:05:40.330007  294110 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.330070  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381884  294110 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1210 07:05:40.381904  294110 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1210 07:05:40.381941  294110 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.381941  294110 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.381971  294110 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1210 07:05:40.381999  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381999  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.381999  294110 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.382052  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.391682  294110 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1210 07:05:40.391727  294110 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:05:40.391740  294110 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.391760  294110 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:05:40.391813  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.391827  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.407956  294110 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1210 07:05:40.408034  294110 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.408072  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.408089  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:40.408129  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.408150  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.408190  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.408250  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.408269  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.426342  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.541129  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.541255  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.550329  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.550497  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.550562  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.582450  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.607731  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.719268  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:05:40.719288  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:05:40.719320  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:05:40.719358  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:05:40.722312  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:05:40.722315  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:05:40.750853  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:05:40.895331  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:05:40.895410  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:40.895432  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1210 07:05:40.895455  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:40.895461  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:40.895554  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1210 07:05:40.895567  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:40.895534  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:40.895624  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:40.895643  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:05:40.895714  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 07:05:40.895807  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:40.917789  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.917839  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1210 07:05:40.917977  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918002  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1210 07:05:40.918069  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:05:40.918081  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 07:05:40.918097  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1210 07:05:40.918098  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:05:40.918148  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 07:05:40.918161  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1210 07:05:40.918239  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918262  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1210 07:05:40.918331  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 07:05:40.918353  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1210 07:05:41.001453  294110 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:41.101612  294110 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:05:41.101699  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
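The preceding stat/scp/podman sequence is minikube's cached-image loading pattern: stat the tarball under /var/lib/minikube/images on the node, copy it from the host cache if it is missing, then load it into cri-o with podman load. A condensed Go sketch of that pattern follows; the local file copy in place of scp and the loadCachedImage helper are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage copies an image tarball from the host cache into the images
// directory if it is not already there, then loads it with `podman load -i`,
// the same command visible in the log above.
func loadCachedImage(cachePath, imagesDir string) error {
	dst := filepath.Join(imagesDir, filepath.Base(cachePath))
	if _, err := os.Stat(dst); err != nil {
		// Missing on the node: copy it over (the real code scps it via ssh_runner).
		data, err := os.ReadFile(cachePath)
		if err != nil {
			return fmt.Errorf("read cache: %w", err)
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return fmt.Errorf("copy to images dir: %w", err)
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"/home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images")
	if err != nil {
		fmt.Println(err)
	}
}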
	I1210 07:05:39.997138  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:40.496354  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:40.997062  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:41.496285  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:41.996480  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:42.497147  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:42.996269  293764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:05:43.278335  293764 kubeadm.go:1114] duration metric: took 12.557086571s to wait for elevateKubeSystemPrivileges
	I1210 07:05:43.278384  293764 kubeadm.go:403] duration metric: took 24.835405162s to StartCluster
	I1210 07:05:43.278412  293764 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:43.278502  293764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:43.280525  293764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:43.280848  293764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:05:43.280848  293764 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.231 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:43.281055  293764 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:05:43.281187  293764 config.go:182] Loaded profile config "old-k8s-version-508835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 07:05:43.281186  293764 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-508835"
	I1210 07:05:43.281213  293764 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-508835"
	I1210 07:05:43.281245  293764 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-508835"
	I1210 07:05:43.281248  293764 host.go:66] Checking if "old-k8s-version-508835" exists ...
	I1210 07:05:43.281261  293764 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508835"
	I1210 07:05:43.283500  293764 out.go:179] * Verifying Kubernetes components...
	I1210 07:05:43.285470  293764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:43.285530  293764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:38.499407  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:38.500131  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:38.999583  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.000263  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:39.499612  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.500439  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:39.998972  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:39.999688  292755 api_server.go:269] stopped: https://192.168.83.172:8443/healthz: Get "https://192.168.83.172:8443/healthz": dial tcp 192.168.83.172:8443: connect: connection refused
	I1210 07:05:40.499088  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.066862  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 07:05:43.066951  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 07:05:43.066973  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.110804  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 07:05:43.110846  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 07:05:43.286991  293764 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:05:43.287057  293764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:05:43.287012  293764 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-508835"
	I1210 07:05:43.287211  293764 host.go:66] Checking if "old-k8s-version-508835" exists ...
	I1210 07:05:43.291267  293764 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:05:43.291354  293764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:05:43.293396  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.294706  293764 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:96:98", ip: ""} in network mk-old-k8s-version-508835: {Iface:virbr2 ExpiryTime:2025-12-10 08:05:06 +0000 UTC Type:0 Mac:52:54:00:8d:96:98 Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:old-k8s-version-508835 Clientid:01:52:54:00:8d:96:98}
	I1210 07:05:43.294753  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined IP address 192.168.50.231 and MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.295354  293764 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/old-k8s-version-508835/id_rsa Username:docker}
	I1210 07:05:43.296944  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.297560  293764 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:96:98", ip: ""} in network mk-old-k8s-version-508835: {Iface:virbr2 ExpiryTime:2025-12-10 08:05:06 +0000 UTC Type:0 Mac:52:54:00:8d:96:98 Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:old-k8s-version-508835 Clientid:01:52:54:00:8d:96:98}
	I1210 07:05:43.297596  293764 main.go:143] libmachine: domain old-k8s-version-508835 has defined IP address 192.168.50.231 and MAC address 52:54:00:8d:96:98 in network mk-old-k8s-version-508835
	I1210 07:05:43.297822  293764 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/old-k8s-version-508835/id_rsa Username:docker}
	I1210 07:05:43.849526  293764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:05:43.849531  293764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:44.053928  293764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:05:44.121429  293764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:05:43.499340  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:43.507135  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:43.507180  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:43.999517  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:44.014644  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:44.014685  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:44.499401  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:44.508865  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:05:44.508906  292755 api_server.go:103] status: https://192.168.83.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:05:44.999629  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:45.006592  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 200:
	ok
	I1210 07:05:45.015651  292755 api_server.go:141] control plane version: v1.34.3
	I1210 07:05:45.015698  292755 api_server.go:131] duration metric: took 57.517173318s to wait for apiserver health ...
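The repeated 403 and 500 responses above are minikube polling the apiserver's /healthz endpoint until it answers 200; the 500 bodies list the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that are still pending. Below is a minimal, self-contained sketch of that polling pattern. It is illustrative only and is not minikube's api_server.go: the InsecureSkipVerify transport, the hard-coded URL, and the retry interval are assumptions made for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// 403 (anonymous access forbidden) and 500 (post-start hooks still running)
// are both treated as "not ready yet", matching the log lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: TLS verification is skipped for brevity; minikube
		// instead trusts the cluster CA and presents client certificates.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.172:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}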
	I1210 07:05:45.015720  292755 cni.go:84] Creating CNI manager for ""
	I1210 07:05:45.015733  292755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:45.017816  292755 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:05:45.019229  292755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:05:45.040143  292755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 07:05:45.070653  292755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:45.079911  292755 system_pods.go:59] 7 kube-system pods found
	I1210 07:05:45.079967  292755 system_pods.go:61] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:45.079978  292755 system_pods.go:61] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.079990  292755 system_pods.go:61] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.080009  292755 system_pods.go:61] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.080022  292755 system_pods.go:61] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.080033  292755 system_pods.go:61] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.080042  292755 system_pods.go:61] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.080056  292755 system_pods.go:74] duration metric: took 9.364751ms to wait for pod list to return data ...
	I1210 07:05:45.080070  292755 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:45.087191  292755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:45.087231  292755 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:45.087262  292755 node_conditions.go:105] duration metric: took 7.181183ms to run NodePressure ...
	I1210 07:05:45.087377  292755 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:05:45.521663  292755 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1210 07:05:45.529152  292755 kubeadm.go:744] kubelet initialised
	I1210 07:05:45.529189  292755 kubeadm.go:745] duration metric: took 7.492666ms waiting for restarted kubelet to initialise ...
	I1210 07:05:45.529220  292755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:05:45.551110  292755 ops.go:34] apiserver oom_adj: -16
	I1210 07:05:45.551148  292755 kubeadm.go:602] duration metric: took 1m22.696983533s to restartPrimaryControlPlane
	I1210 07:05:45.551164  292755 kubeadm.go:403] duration metric: took 1m22.85495137s to StartCluster
	I1210 07:05:45.551190  292755 settings.go:142] acquiring lock: {Name:mkfd19ecbf4d1e6319f3bb5fd2369931dc469304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:45.551288  292755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:05:45.553273  292755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/kubeconfig: {Name:mk89e62df614d075d4d9ba9b9215d18e6c14ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:45.553697  292755 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.172 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:05:45.553844  292755 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:05:45.554079  292755 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:05:45.555625  292755 out.go:179] * Verifying Kubernetes components...
	I1210 07:05:45.555642  292755 out.go:179] * Enabled addons: 
	I1210 07:05:41.238442  294110 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:05:41.238506  294110 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:41.238576  294110 ssh_runner.go:195] Run: which crictl
	I1210 07:05:41.727941  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1210 07:05:41.728004  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:41.728072  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:05:41.728076  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:44.443584  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (2.715474141s)
	I1210 07:05:44.443633  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 07:05:44.443649  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.715498363s)
	I1210 07:05:44.443667  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:44.443732  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:44.443735  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:05:46.088976  293764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.239317287s)
	I1210 07:05:46.089017  293764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.239431401s)
	I1210 07:05:46.089055  293764 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1210 07:05:46.090367  293764 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508835" to be "Ready" ...
	I1210 07:05:46.136096  293764 node_ready.go:49] node "old-k8s-version-508835" is "Ready"
	I1210 07:05:46.136142  293764 node_ready.go:38] duration metric: took 45.736627ms for node "old-k8s-version-508835" to be "Ready" ...
	I1210 07:05:46.136186  293764 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:05:46.136276  293764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:05:46.596354  293764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508835" context rescaled to 1 replicas
	I1210 07:05:46.628544  293764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.50706843s)
	I1210 07:05:46.628577  293764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.574595426s)
	I1210 07:05:46.628629  293764 api_server.go:72] duration metric: took 3.347732942s to wait for apiserver process to appear ...
	I1210 07:05:46.628648  293764 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:05:46.628672  293764 api_server.go:253] Checking apiserver healthz at https://192.168.50.231:8443/healthz ...
	I1210 07:05:46.645038  293764 api_server.go:279] https://192.168.50.231:8443/healthz returned 200:
	ok
	I1210 07:05:46.647241  293764 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 07:05:45.557500  292755 addons.go:530] duration metric: took 3.655629ms for enable addons: enabled=[]
	I1210 07:05:45.557511  292755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:45.868638  292755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:45.904126  292755 node_ready.go:35] waiting up to 6m0s for node "pause-179913" to be "Ready" ...
	I1210 07:05:45.911742  292755 node_ready.go:49] node "pause-179913" is "Ready"
	I1210 07:05:45.911782  292755 node_ready.go:38] duration metric: took 7.609763ms for node "pause-179913" to be "Ready" ...
	I1210 07:05:45.911803  292755 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:05:45.911894  292755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:05:45.939769  292755 api_server.go:72] duration metric: took 386.01566ms to wait for apiserver process to appear ...
	I1210 07:05:45.939810  292755 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:05:45.939836  292755 api_server.go:253] Checking apiserver healthz at https://192.168.83.172:8443/healthz ...
	I1210 07:05:45.946763  292755 api_server.go:279] https://192.168.83.172:8443/healthz returned 200:
	ok
	I1210 07:05:45.948279  292755 api_server.go:141] control plane version: v1.34.3
	I1210 07:05:45.948308  292755 api_server.go:131] duration metric: took 8.488476ms to wait for apiserver health ...
	I1210 07:05:45.948322  292755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:45.955449  292755 system_pods.go:59] 7 kube-system pods found
	I1210 07:05:45.955490  292755 system_pods.go:61] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running
	I1210 07:05:45.955498  292755 system_pods.go:61] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.955507  292755 system_pods.go:61] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.955514  292755 system_pods.go:61] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.955528  292755 system_pods.go:61] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.955533  292755 system_pods.go:61] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.955542  292755 system_pods.go:61] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.955552  292755 system_pods.go:74] duration metric: took 7.220379ms to wait for pod list to return data ...
	I1210 07:05:45.955569  292755 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:05:45.963188  292755 default_sa.go:45] found service account: "default"
	I1210 07:05:45.963230  292755 default_sa.go:55] duration metric: took 7.652247ms for default service account to be created ...
	I1210 07:05:45.963246  292755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:05:45.968360  292755 system_pods.go:86] 7 kube-system pods found
	I1210 07:05:45.968396  292755 system_pods.go:89] "coredns-66bc5c9577-nnm25" [e290831e-e710-4fc6-9170-9661176ac06f] Running
	I1210 07:05:45.968406  292755 system_pods.go:89] "coredns-66bc5c9577-qcwf4" [3c3003a3-50a1-4248-a19f-458f0c14923b] Running
	I1210 07:05:45.968417  292755 system_pods.go:89] "etcd-pause-179913" [35c0270f-1f5b-4021-b115-868d55375c8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:05:45.968427  292755 system_pods.go:89] "kube-apiserver-pause-179913" [43bb2255-bccb-446b-831c-368f2cc51f12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:05:45.968439  292755 system_pods.go:89] "kube-controller-manager-pause-179913" [88a9da74-5d42-4281-8035-8b888b98724e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:05:45.968444  292755 system_pods.go:89] "kube-proxy-rvmnw" [9d3714c1-16f4-4178-9896-49713556e897] Running
	I1210 07:05:45.968453  292755 system_pods.go:89] "kube-scheduler-pause-179913" [7be03420-22a0-4b6b-832f-5867a771f911] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:05:45.968464  292755 system_pods.go:126] duration metric: took 5.209005ms to wait for k8s-apps to be running ...
	I1210 07:05:45.968474  292755 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:05:45.968540  292755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:45.996916  292755 system_svc.go:56] duration metric: took 28.425073ms WaitForService to wait for kubelet
	I1210 07:05:45.996951  292755 kubeadm.go:587] duration metric: took 443.205596ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:45.996969  292755 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:46.002477  292755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:46.002514  292755 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:46.002541  292755 node_conditions.go:105] duration metric: took 5.566098ms to run NodePressure ...
	I1210 07:05:46.002562  292755 start.go:242] waiting for startup goroutines ...
	I1210 07:05:46.002573  292755 start.go:247] waiting for cluster config update ...
	I1210 07:05:46.002584  292755 start.go:256] writing updated cluster config ...
	I1210 07:05:46.003020  292755 ssh_runner.go:195] Run: rm -f paused
	I1210 07:05:46.013646  292755 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:46.014701  292755 kapi.go:59] client config for pause-179913: &rest.Config{Host:"https://192.168.83.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/profiles/pause-179913/client.key", CAFile:"/home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:05:46.019617  292755 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnm25" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.028451  292755 pod_ready.go:94] pod "coredns-66bc5c9577-nnm25" is "Ready"
	I1210 07:05:46.028495  292755 pod_ready.go:86] duration metric: took 8.845469ms for pod "coredns-66bc5c9577-nnm25" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.028511  292755 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qcwf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.045384  292755 pod_ready.go:94] pod "coredns-66bc5c9577-qcwf4" is "Ready"
	I1210 07:05:46.045431  292755 pod_ready.go:86] duration metric: took 16.910963ms for pod "coredns-66bc5c9577-qcwf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:46.050198  292755 pod_ready.go:83] waiting for pod "etcd-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:48.060020  292755 pod_ready.go:104] pod "etcd-pause-179913" is not "Ready", error: <nil>
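The pod_ready.go lines above wait for each control-plane pod in kube-system to report the Ready condition (or be gone). A short client-go sketch of that readiness check follows; the kubeconfig path and pod name are taken from the log, but the helper itself is illustrative and is not minikube's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: the kubeconfig written by minikube for this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-243461/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-179913", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}

The same one-off check from a shell would be roughly "kubectl --context pause-179913 -n kube-system wait pod etcd-pause-179913 --for=condition=Ready", which is what the warnings above keep retrying in-process.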
	I1210 07:05:46.647365  293764 api_server.go:141] control plane version: v1.28.0
	I1210 07:05:46.647402  293764 api_server.go:131] duration metric: took 18.739825ms to wait for apiserver health ...
	I1210 07:05:46.647416  293764 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:05:46.648561  293764 addons.go:530] duration metric: took 3.367506503s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 07:05:46.654059  293764 system_pods.go:59] 8 kube-system pods found
	I1210 07:05:46.654121  293764 system_pods.go:61] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.654137  293764 system_pods.go:61] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.654146  293764 system_pods.go:61] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.654152  293764 system_pods.go:61] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.654159  293764 system_pods.go:61] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.654178  293764 system_pods.go:61] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.654188  293764 system_pods.go:61] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.654195  293764 system_pods.go:61] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending
	I1210 07:05:46.654206  293764 system_pods.go:74] duration metric: took 6.781989ms to wait for pod list to return data ...
	I1210 07:05:46.654221  293764 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:05:46.664597  293764 default_sa.go:45] found service account: "default"
	I1210 07:05:46.664647  293764 default_sa.go:55] duration metric: took 10.414638ms for default service account to be created ...
	I1210 07:05:46.664664  293764 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:05:46.670689  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:46.670748  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.670759  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.670770  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.670778  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.670784  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.670793  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.670799  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.670808  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:46.670836  293764 retry.go:31] will retry after 275.201281ms: missing components: kube-dns, kube-proxy
	I1210 07:05:46.958117  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:46.958171  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.958185  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:46.958201  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:46.958210  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:46.958217  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:46.958226  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:05:46.958238  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:46.958248  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:46.958274  293764 retry.go:31] will retry after 291.602863ms: missing components: kube-dns, kube-proxy
	I1210 07:05:47.257153  293764 system_pods.go:86] 8 kube-system pods found
	I1210 07:05:47.257204  293764 system_pods.go:89] "coredns-5dd5756b68-bl5vd" [64e50760-ecaf-446d-b6e6-65a28a8484c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:47.257220  293764 system_pods.go:89] "coredns-5dd5756b68-gj27p" [c51a8f37-b1ba-4d1e-9af8-b33b4c20009e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:05:47.257232  293764 system_pods.go:89] "etcd-old-k8s-version-508835" [b8fadbf6-8e6c-47ac-8392-a48900a6b56f] Running
	I1210 07:05:47.257239  293764 system_pods.go:89] "kube-apiserver-old-k8s-version-508835" [4a755f71-dcf8-4954-8e7f-9992863006af] Running
	I1210 07:05:47.257245  293764 system_pods.go:89] "kube-controller-manager-old-k8s-version-508835" [3f001d72-13ca-48fc-b8a4-a03abc797ece] Running
	I1210 07:05:47.257253  293764 system_pods.go:89] "kube-proxy-d2m7p" [fe67c773-3e41-4687-b8cf-e180ee486e76] Running
	I1210 07:05:47.257257  293764 system_pods.go:89] "kube-scheduler-old-k8s-version-508835" [45cf3d01-ceb3-4e1a-8704-3a3d9ec53721] Running
	I1210 07:05:47.257275  293764 system_pods.go:89] "storage-provisioner" [15fa6465-6c07-452f-b9b1-3284b48b4d20] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:05:47.257290  293764 system_pods.go:126] duration metric: took 592.616953ms to wait for k8s-apps to be running ...
	I1210 07:05:47.257305  293764 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:05:47.257367  293764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:47.287194  293764 system_svc.go:56] duration metric: took 29.877591ms WaitForService to wait for kubelet
	I1210 07:05:47.287236  293764 kubeadm.go:587] duration metric: took 4.006339822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:05:47.287301  293764 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:05:47.291237  293764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 07:05:47.291274  293764 node_conditions.go:123] node cpu capacity is 2
	I1210 07:05:47.291295  293764 node_conditions.go:105] duration metric: took 3.977687ms to run NodePressure ...
	I1210 07:05:47.291313  293764 start.go:242] waiting for startup goroutines ...
	I1210 07:05:47.291325  293764 start.go:247] waiting for cluster config update ...
	I1210 07:05:47.291345  293764 start.go:256] writing updated cluster config ...
	I1210 07:05:47.291764  293764 ssh_runner.go:195] Run: rm -f paused
	I1210 07:05:47.300101  293764 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:47.307229  293764 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bl5vd" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:49.316287  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	I1210 07:05:46.636984  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (2.19313391s)
	I1210 07:05:46.637028  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 07:05:46.637041  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.193276255s)
	I1210 07:05:46.637076  294110 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:46.637121  294110 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:05:46.637140  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:05:48.624381  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.987206786s)
	I1210 07:05:48.624426  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 07:05:48.624420  294110 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.987278214s)
	I1210 07:05:48.624460  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:48.624464  294110 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:05:48.624504  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:05:48.624554  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:50.297711  294110 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.673133719s)
	I1210 07:05:50.297749  294110 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:05:50.297772  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.6732374s)
	I1210 07:05:50.297811  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 07:05:50.297860  294110 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:50.297775  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:05:50.297985  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:05:49.558669  292755 pod_ready.go:94] pod "etcd-pause-179913" is "Ready"
	I1210 07:05:49.558712  292755 pod_ready.go:86] duration metric: took 3.508461655s for pod "etcd-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:49.570978  292755 pod_ready.go:83] waiting for pod "kube-apiserver-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:05:51.579940  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	W1210 07:05:51.816055  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	W1210 07:05:54.314524  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	I1210 07:05:52.974191  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.676158812s)
	I1210 07:05:52.974233  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 07:05:52.974289  294110 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:52.974364  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:05:55.032742  294110 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (2.058339316s)
	I1210 07:05:55.032788  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 07:05:55.032820  294110 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:55.032905  294110 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:05:55.790037  294110 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-243461/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:05:55.790084  294110 cache_images.go:125] Successfully loaded all cached images
	I1210 07:05:55.790090  294110 cache_images.go:94] duration metric: took 15.742526615s to LoadCachedImages
	I1210 07:05:55.790106  294110 kubeadm.go:935] updating node { 192.168.72.64 8443 v1.35.0-rc.1 crio true true} ...
	I1210 07:05:55.790215  294110 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-548860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:05:55.790313  294110 ssh_runner.go:195] Run: crio config
	I1210 07:05:55.848041  294110 cni.go:84] Creating CNI manager for ""
	I1210 07:05:55.848090  294110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 07:05:55.848120  294110 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:05:55.848153  294110 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.64 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-548860 NodeName:no-preload-548860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:05:55.848333  294110 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-548860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.64"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.64"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:05:55.848441  294110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:05:55.864032  294110 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 07:05:55.864106  294110 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:05:55.880330  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
	I1210 07:05:55.880429  294110 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1210 07:05:55.880448  294110 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1210 07:05:55.880502  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 07:05:55.880525  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 07:05:55.887791  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 07:05:55.887836  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1210 07:05:55.888154  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 07:05:55.888216  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	W1210 07:05:54.084084  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	W1210 07:05:56.579205  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	I1210 07:05:56.632450  294110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:05:56.651761  294110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 07:05:56.657644  294110 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 07:05:56.657694  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1210 07:05:56.977184  294110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:05:56.994355  294110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 07:05:57.020230  294110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:05:57.044542  294110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:05:57.071132  294110 ssh_runner.go:195] Run: grep 192.168.72.64	control-plane.minikube.internal$ /etc/hosts
	I1210 07:05:57.078585  294110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:05:57.100719  294110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:05:57.273820  294110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:05:57.313810  294110 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860 for IP: 192.168.72.64
	I1210 07:05:57.313836  294110 certs.go:195] generating shared ca certs ...
	I1210 07:05:57.313857  294110 certs.go:227] acquiring lock for ca certs: {Name:mk2c8c8bbc628186be8cfd9c613269482a34a061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.314101  294110 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key
	I1210 07:05:57.314149  294110 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key
	I1210 07:05:57.314162  294110 certs.go:257] generating profile certs ...
	I1210 07:05:57.314240  294110 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key
	I1210 07:05:57.314255  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt with IP's: []
	I1210 07:05:57.386914  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt ...
	I1210 07:05:57.386948  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt: {Name:mk466ed40cc010bd20e3989ea8bea4b4ef4cd073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.387140  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key ...
	I1210 07:05:57.387151  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.key: {Name:mk8760fbb8db80630c1d9e63702eb572aa8256a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.387237  294110 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670
	I1210 07:05:57.387252  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.64]
	I1210 07:05:57.414920  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 ...
	I1210 07:05:57.414950  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670: {Name:mk6430d3e58dc86adff6ff0de0dd0fefac0b0a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.415123  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670 ...
	I1210 07:05:57.415139  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670: {Name:mkae2382e0450b2cd3c8cfb56e9465e4c1b5ae33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.415223  294110 certs.go:382] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt.e8899670 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt
	I1210 07:05:57.415295  294110 certs.go:386] copying /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key.e8899670 -> /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key
	I1210 07:05:57.415353  294110 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key
	I1210 07:05:57.415369  294110 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt with IP's: []
	I1210 07:05:57.539953  294110 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt ...
	I1210 07:05:57.539986  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt: {Name:mkaf5ae2e0f916e1d768e22c989c83a2b243ccc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:05:57.540173  294110 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key ...
	I1210 07:05:57.540196  294110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key: {Name:mk41dd84cb64b6d4f260f1ee218c4c81b62b6b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
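The certs.go/crypto.go lines above generate CA-signed profile certificates, with the apiserver serving cert carrying the SAN IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.64. A self-contained standard-library sketch of that kind of issuance (not minikube's implementation); the throwaway self-signed CA below stands in for the cached minikubeCA, and error handling is mostly elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for the cached minikubeCA from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate with the SAN IPs listed in the log for apiserver.crt.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.64"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued serving cert, %d DER bytes\n", len(srvDER))
}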
	I1210 07:05:57.540381  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem (1338 bytes)
	W1210 07:05:57.540441  294110 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366_empty.pem, impossibly tiny 0 bytes
	I1210 07:05:57.540457  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:05:57.540505  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:05:57.540560  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:05:57.540601  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/certs/key.pem (1675 bytes)
	I1210 07:05:57.540664  294110 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem (1708 bytes)
	I1210 07:05:57.541340  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:05:57.579801  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:05:57.618035  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:05:57.653947  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:05:57.688863  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:05:57.724149  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:05:57.759600  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:05:57.794961  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:05:57.831752  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/ssl/certs/2473662.pem --> /usr/share/ca-certificates/2473662.pem (1708 bytes)
	I1210 07:05:57.870309  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:05:57.906615  294110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-243461/.minikube/certs/247366.pem --> /usr/share/ca-certificates/247366.pem (1338 bytes)
	I1210 07:05:57.942207  294110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:05:57.966558  294110 ssh_runner.go:195] Run: openssl version
	I1210 07:05:57.974433  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/247366.pem
	I1210 07:05:57.989402  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/247366.pem /etc/ssl/certs/247366.pem
	I1210 07:05:58.004911  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.011298  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:59 /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.011367  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247366.pem
	I1210 07:05:58.023069  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:05:58.041279  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/247366.pem /etc/ssl/certs/51391683.0
	I1210 07:05:58.060692  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.078676  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2473662.pem /etc/ssl/certs/2473662.pem
	I1210 07:05:58.093713  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.100033  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:59 /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.100101  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2473662.pem
	I1210 07:05:58.107912  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:58.120976  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2473662.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:05:58.134042  294110 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.147702  294110 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:05:58.163857  294110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.169936  294110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.170004  294110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:05:58.177959  294110 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:05:58.193848  294110 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
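The sequence above installs each CA into the system trust directory by asking openssl for its subject hash and symlinking <hash>.0 to the PEM, which is how the 51391683.0, 3ec20f2e.0 and b5213941.0 links are produced. A hedged Go sketch of the same hash-and-symlink step, run locally rather than over SSH with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash, then
// symlinks <hash>.0 -> the cert so OpenSSL-style lookup in /etc/ssl/certs can
// find it. Sketch of the pattern in the log, not minikube's code.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate `ln -fs` by replacing any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths standing in for the ones in the log.
	if err := linkBySubjectHash("/etc/ssl/certs/247366.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}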
	I1210 07:05:58.206987  294110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:05:58.212249  294110 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:05:58.212311  294110 kubeadm.go:401] StartCluster: {Name:no-preload-548860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-rc.1 ClusterName:no-preload-548860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.64 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:05:58.212384  294110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:05:58.212453  294110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:05:58.260948  294110 cri.go:89] found id: ""
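Before kubeadm init, the kube-system containers are enumerated with the crictl invocation shown above (it returns nothing on this fresh node, hence the empty id). A small Go wrapper around that exact crictl command, as a sketch rather than minikube's cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out to the same crictl invocation seen in
// the log to collect container IDs labelled with the kube-system namespace.
// Sketch only; minikube runs this through its ssh_runner with `sudo -s eval`.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system container(s)\n", len(ids)) // the log above found none
}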
	I1210 07:05:58.261030  294110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:05:58.274813  294110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:05:58.288739  294110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:05:58.301950  294110 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:05:58.301977  294110 kubeadm.go:158] found existing configuration files:
	
	I1210 07:05:58.302032  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:05:58.314615  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:05:58.314707  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:05:58.328609  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:05:58.340359  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:05:58.340422  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:05:58.355231  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:05:58.368650  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:05:58.368708  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:05:58.382022  294110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:05:58.395441  294110 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:05:58.395528  294110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
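Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails, so kubeadm can regenerate it. A compact Go sketch of that stale-config cleanup, operating on hypothetical local copies of the files rather than grep and rm over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig mirrors the per-file check in the log: if the file
// does not reference the expected control-plane endpoint, delete it so
// kubeadm can regenerate it. Illustrative only.
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		// Hypothetical local copies standing in for /etc/kubernetes/<f>.
		if err := removeStaleKubeconfig("/tmp/kubernetes/"+f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}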
	I1210 07:05:58.410563  294110 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 07:05:58.470728  294110 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:05:58.470783  294110 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:05:58.647776  294110 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:05:58.648000  294110 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:05:58.648167  294110 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:05:58.672800  294110 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1210 07:05:58.579566  292755 pod_ready.go:104] pod "kube-apiserver-pause-179913" is not "Ready", error: <nil>
	I1210 07:05:59.577705  292755 pod_ready.go:94] pod "kube-apiserver-pause-179913" is "Ready"
	I1210 07:05:59.577735  292755 pod_ready.go:86] duration metric: took 10.006723147s for pod "kube-apiserver-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.580798  292755 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.585440  292755 pod_ready.go:94] pod "kube-controller-manager-pause-179913" is "Ready"
	I1210 07:05:59.585465  292755 pod_ready.go:86] duration metric: took 4.642896ms for pod "kube-controller-manager-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.588130  292755 pod_ready.go:83] waiting for pod "kube-proxy-rvmnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.592715  292755 pod_ready.go:94] pod "kube-proxy-rvmnw" is "Ready"
	I1210 07:05:59.592750  292755 pod_ready.go:86] duration metric: took 4.593394ms for pod "kube-proxy-rvmnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.594803  292755 pod_ready.go:83] waiting for pod "kube-scheduler-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.775853  292755 pod_ready.go:94] pod "kube-scheduler-pause-179913" is "Ready"
	I1210 07:05:59.775899  292755 pod_ready.go:86] duration metric: took 181.065401ms for pod "kube-scheduler-pause-179913" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:05:59.775916  292755 pod_ready.go:40] duration metric: took 13.762227751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:05:59.828436  292755 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:05:59.830479  292755 out.go:179] * Done! kubectl is now configured to use "pause-179913" cluster and "default" namespace by default
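The pod_ready lines for pause-179913 poll each control-plane pod until its Ready condition turns True (or the pod disappears). A hedged client-go sketch of such a readiness wait, assuming a kubeconfig path and reusing a pod name from the log; this is not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until its Ready condition is True,
// roughly what the pod_ready.go lines above report. Assumes a kubeconfig at
// the given path points at the cluster.
func waitPodReady(ctx context.Context, kubeconfig, name string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	// Hypothetical kubeconfig path; pod name taken from the log above.
	if err := waitPodReady(ctx, "/home/jenkins/.kube/config", "kube-apiserver-pause-179913"); err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}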
	W1210 07:05:56.319387  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	W1210 07:05:58.814459  293764 pod_ready.go:104] pod "coredns-5dd5756b68-bl5vd" is not "Ready", error: <nil>
	I1210 07:05:58.674844  294110 out.go:252]   - Generating certificates and keys ...
	I1210 07:05:58.674965  294110 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:05:58.675081  294110 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:05:58.951251  294110 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:05:59.108867  294110 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:05:59.267601  294110 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:05:59.300867  294110 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:05:59.383313  294110 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:05:59.383494  294110 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-548860] and IPs [192.168.72.64 127.0.0.1 ::1]
	I1210 07:05:59.438312  294110 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:05:59.438516  294110 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-548860] and IPs [192.168.72.64 127.0.0.1 ::1]
	I1210 07:05:59.630414  294110 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:05:59.842084  294110 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:05:59.991146  294110 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:05:59.991522  294110 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:06:00.053116  294110 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:06:00.216675  294110 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:06:00.313266  294110 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:06:00.334134  294110 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:06:00.392774  294110 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:06:00.393746  294110 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:06:00.402715  294110 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:06:00.404683  294110 out.go:252]   - Booting up control plane ...
	I1210 07:06:00.404814  294110 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:06:00.404987  294110 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:06:00.405120  294110 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:06:00.442055  294110 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:06:00.442187  294110 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:06:00.451151  294110 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:06:00.451461  294110 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:06:00.451536  294110 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:06:00.678505  294110 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:06:00.678718  294110 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:06:01.178954  294110 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.912472ms
	I1210 07:06:01.187287  294110 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:06:01.187491  294110 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.64:8443/livez
	I1210 07:06:01.187644  294110 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:06:01.187765  294110 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
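kubeadm's control-plane-check probes the three health endpoints listed above. A minimal Go sketch that issues the same probes; TLS verification is skipped here only to keep the example self-contained, whereas kubeadm trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe hits one of the health endpoints kubeadm's control-plane-check
// reports above and prints the HTTP status.
func probe(client *http.Client, url string) {
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%-45s unreachable: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%-45s %s\n", url, resp.Status)
}

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, url := range []string{
		"https://192.168.72.64:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz",  // kube-controller-manager
		"https://127.0.0.1:10259/livez",    // kube-scheduler
	} {
		probe(client, url)
	}
}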
	
	
	==> CRI-O <==
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.482440150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350363482407430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f085b656-5aa9-4058-8edf-0a8d4aaa9c15 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.483969467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51ae071e-3087-4773-8aaf-be82b330fedf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.484053235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51ae071e-3087-4773-8aaf-be82b330fedf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.485043432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51ae071e-3087-4773-8aaf-be82b330fedf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.539657888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96d31954-3637-4cb8-afb8-f7e8e95f10c8 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.539930116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96d31954-3637-4cb8-afb8-f7e8e95f10c8 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.541415468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7f928df-43f3-4a22-a416-219064b02914 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.541993783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350363541961231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7f928df-43f3-4a22-a416-219064b02914 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.543524756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69bd2046-f8c7-43f3-bd2d-23384901a641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.543920886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69bd2046-f8c7-43f3-bd2d-23384901a641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.544407252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69bd2046-f8c7-43f3-bd2d-23384901a641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.601398750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08fb436b-0a62-487e-a949-701c6eb9a191 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.601480638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08fb436b-0a62-487e-a949-701c6eb9a191 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.603298326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db98a9b3-1e5a-40ae-9684-4ca7fa366c54 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.603770711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350363603711050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db98a9b3-1e5a-40ae-9684-4ca7fa366c54 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.604630437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=627d88e4-c257-42bd-bfbc-4d61a4718c9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.604707828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=627d88e4-c257-42bd-bfbc-4d61a4718c9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.606130386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=627d88e4-c257-42bd-bfbc-4d61a4718c9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.656816048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc8e3ea4-d83c-4e5e-83b4-cd7adfa6efb4 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.657131888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc8e3ea4-d83c-4e5e-83b4-cd7adfa6efb4 name=/runtime.v1.RuntimeService/Version
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.659153311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=005db25f-bc94-48fb-9555-e4cd0162f55b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.659486356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765350363659464809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:94590,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=005db25f-bc94-48fb-9555-e4cd0162f55b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.661262259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0322f2d7-8b98-488a-815e-c5fa5b4ff4e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.661347568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0322f2d7-8b98-488a-815e-c5fa5b4ff4e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 07:06:03 pause-179913 crio[4102]: time="2025-12-10 07:06:03.661867763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765350343586873794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343626062885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765350343570691572,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:3,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765350340069651864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d
94,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765350340057924659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765350339140891813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788,PodSandboxId:99b0eb3a927dd8bc0d3f78a6e7bb4a2540ea53f84919fb76484fb8d2adcac737,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765350281745635630,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938741addc56bf029154c923341405b9,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726,PodSandboxId:2061c7ea68bf89e14ee38cfad841b239b98398d15dd36b780a4584e80a2ee08e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765350277751476098,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745,PodSandboxId:fcc77b197ab7ba0974402e0e7591b6dfd4c3f27337e91bb4b0b5bee6089f384d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_EXITED,CreatedAt:1765350274734869551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3720f22a5a575d6f6cca10f29e0903b1,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71,PodSandboxId:90fe1e0a09e5752a8f787b5b6aad437211432f074d5a3e02ad9bb392a4b71f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261817030439,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-66bc5c9577-nnm25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e290831e-e710-4fc6-9170-9661176ac06f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08,PodSandboxId:cd4b91455889c946c7f753fc066b354b2cf1aa4b8cfc1a8cdcb5fdcf991f4c73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc
9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765350261578952454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qcwf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3003a3-50a1-4248-a19f-458f0c14923b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86,PodSandboxId:b9b370bea3b07a3060ee5933abf9580abf3d900bf0fdaa39320be925d5eaf5a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765350260740092879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvmnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3714c1-16f4-4178-9896-49713556e897,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76,PodSandboxId:eb5d314eec39e0b98be6f956857362ce113f57b4ee4b01a40558183165e11faa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765350260588773554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c54887e4f2545a88be76f1664442e1,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf,PodSandboxId:de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765350258209516716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-179913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47cf332bf25146df9a84a50003ecfff,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10
259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0322f2d7-8b98-488a-815e-c5fa5b4ff4e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cb74559cc4603       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago       Running             coredns                   2                   cd4b91455889c       coredns-66bc5c9577-qcwf4               kube-system
	d3f82629cddb7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   20 seconds ago       Running             kube-proxy                2                   b9b370bea3b07       kube-proxy-rvmnw                       kube-system
	4c72d4cfa68cc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago       Running             coredns                   2                   90fe1e0a09e57       coredns-66bc5c9577-nnm25               kube-system
	9d0fef2ee087e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   23 seconds ago       Running             kube-controller-manager   3                   99b0eb3a927dd       kube-controller-manager-pause-179913   kube-system
	492bd86865d3b       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   23 seconds ago       Running             kube-apiserver            3                   fcc77b197ab7b       kube-apiserver-pause-179913            kube-system
	2efb4548b8c28       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   24 seconds ago       Running             etcd                      2                   eb5d314eec39e       etcd-pause-179913                      kube-system
	93385a5ea66df       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   About a minute ago   Exited              kube-controller-manager   2                   99b0eb3a927dd       kube-controller-manager-pause-179913   kube-system
	2d6458c094802       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   About a minute ago   Running             kube-scheduler            2                   2061c7ea68bf8       kube-scheduler-pause-179913            kube-system
	0431ea1f78207       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   About a minute ago   Exited              kube-apiserver            2                   fcc77b197ab7b       kube-apiserver-pause-179913            kube-system
	f9ae0c7983bb0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   1                   90fe1e0a09e57       coredns-66bc5c9577-nnm25               kube-system
	7d19e592c4f7e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   1                   cd4b91455889c       coredns-66bc5c9577-qcwf4               kube-system
	8d319ed6655ed       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   About a minute ago   Exited              kube-proxy                1                   b9b370bea3b07       kube-proxy-rvmnw                       kube-system
	874494df45e73       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      1                   eb5d314eec39e       etcd-pause-179913                      kube-system
	1edd69d2810bc       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   About a minute ago   Exited              kube-scheduler            1                   de62457ace834       kube-scheduler-pause-179913            kube-system
	
	
	==> coredns [4c72d4cfa68cc80aa133569590e8d26ba690044cc5f7448f2a20dfc957858336] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59511 - 30802 "HINFO IN 7029509918167748090.3066795933630391036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05938676s
	
	
	==> coredns [7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46799 - 16884 "HINFO IN 6633913088188194778.3487392683332638165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.464373846s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb74559cc46038b162194361f011fcc18ddb25a581d26c9989eba05f2e2ac410] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43785 - 53564 "HINFO IN 4244757400304116886.3751084024964913518. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.452333468s
	
	
	==> coredns [f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35110 - 44402 "HINFO IN 3304936518507635904.4878337891850190860. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.159391822s
	
	
	==> describe nodes <==
	Name:               pause-179913
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-179913
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-179913
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_02_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:02:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-179913
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:06:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:05:43 +0000   Wed, 10 Dec 2025 07:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.172
	  Hostname:    pause-179913
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 63ad5a9c54c0437d9f8cf6c0f08657e6
	  System UUID:                63ad5a9c-54c0-437d-9f8c-f6c0f08657e6
	  Boot ID:                    1613e1a6-e262-4f53-82c2-9652eb7aa8b7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nnm25                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m8s
	  kube-system                 coredns-66bc5c9577-qcwf4                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     3m8s
	  kube-system                 etcd-pause-179913                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m13s
	  kube-system                 kube-apiserver-pause-179913             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-controller-manager-pause-179913    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-proxy-rvmnw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-scheduler-pause-179913             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m6s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m22s              kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m13s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m13s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m13s              kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s              kubelet          Node pause-179913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s              kubelet          Node pause-179913 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m12s              kubelet          Node pause-179913 status is now: NodeReady
	  Normal  RegisteredNode           3m9s               node-controller  Node pause-179913 event: Registered Node pause-179913 in Controller
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x5 over 77s)  kubelet          Node pause-179913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x5 over 77s)  kubelet          Node pause-179913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x5 over 77s)  kubelet          Node pause-179913 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node pause-179913 event: Registered Node pause-179913 in Controller
	
	
	==> dmesg <==
	[Dec10 07:01] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec10 07:02] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000433] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.211962] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000023] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101205] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112366] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.131800] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.149171] kauditd_printk_skb: 172 callbacks suppressed
	[  +0.815709] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 07:03] kauditd_printk_skb: 224 callbacks suppressed
	[Dec10 07:04] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.959488] kauditd_printk_skb: 335 callbacks suppressed
	[ +10.801140] kauditd_printk_skb: 200 callbacks suppressed
	[  +5.462135] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.805196] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.719004] kauditd_printk_skb: 22 callbacks suppressed
	[Dec10 07:05] kauditd_printk_skb: 36 callbacks suppressed
	[  +3.042682] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2efb4548b8c28445c05d98cdf3122b7a7acb919a34b0832806fc3bd6eafbd2b6] <==
	{"level":"warn","ts":"2025-12-10T07:05:41.836239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.847700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.860260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.872741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.883956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.896621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.910714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.923309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.934382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.948961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.961371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.972134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.988706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:41.998930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.016246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.034874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.043879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.054444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.070949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.101838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.106675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.125505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.140745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:05:42.191632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46374","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:05:54.876422Z","caller":"traceutil/trace.go:172","msg":"trace[12156952] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"187.257891ms","start":"2025-12-10T07:05:54.689150Z","end":"2025-12-10T07:05:54.876408Z","steps":["trace[12156952] 'process raft request'  (duration: 186.624007ms)"],"step_count":1}
	
	
	==> etcd [874494df45e73978d6466ad43ce8a528791399e10087dc8d328fa73f04738b76] <==
	{"level":"warn","ts":"2025-12-10T07:04:36.811150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.817438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.827807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.843333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.867831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.879668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:04:36.967271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:04:37.366824Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T07:04:37.367018Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-179913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.172:2380"],"advertise-client-urls":["https://192.168.83.172:2379"]}
	{"level":"error","ts":"2025-12-10T07:04:37.367323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:04:44.370189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:04:44.370252Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.370274Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dbf03bba59342af","current-leader-member-id":"dbf03bba59342af"}
	{"level":"info","ts":"2025-12-10T07:04:44.370415Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T07:04:44.370435Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375088Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.172:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375262Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.172:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:04:44.375283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.172:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375339Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:04:44.375370Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:04:44.375387Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.378326Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.172:2380"}
	{"level":"error","ts":"2025-12-10T07:04:44.378500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.172:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:04:44.378615Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.172:2380"}
	{"level":"info","ts":"2025-12-10T07:04:44.378684Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-179913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.172:2380"],"advertise-client-urls":["https://192.168.83.172:2379"]}
	
	
	==> kernel <==
	 07:06:04 up 4 min,  0 users,  load average: 0.26, 0.31, 0.14
	Linux pause-179913 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0431ea1f78207f3fe3a3d98e4cf18589c6a270226e73b99835c88ee5df60b745] <==
	W1210 07:05:22.871120       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:22.931734       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.059860       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.131889       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.182739       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.194772       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.422100       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.854119       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:23.922664       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:24.103344Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1210 07:05:24.617321       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:05:25.694649       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:26.110718Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:28.117978Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:28.787308Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00226da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	E1210 07:05:28.787498       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1210 07:05:28.787774       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-179913?timeout=10s" auditID="24a34f72-c7bc-4a60-8d41-d2f85e33ab4e"
	E1210 07:05:28.787866       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="67.3µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-179913" result=null
	W1210 07:05:28.856988       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:30.125006Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:32.133771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	{"level":"warn","ts":"2025-12-10T07:05:34.140814Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1210 07:05:35.988993       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-12-10T07:05:36.147253Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ce6000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	F1210 07:05:37.656658       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [492bd86865d3bf6e3fd652b744da85dc827039833cf89b2d1c37c5c06e452d94] <==
	I1210 07:05:43.193276       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 07:05:43.193359       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 07:05:43.193655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 07:05:43.194829       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 07:05:43.194862       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 07:05:43.195511       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 07:05:43.195680       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 07:05:43.212371       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 07:05:43.220343       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 07:05:43.226647       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 07:05:43.226718       1 policy_source.go:240] refreshing policies
	I1210 07:05:43.227160       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 07:05:43.232188       1 aggregator.go:171] initial CRD sync complete...
	I1210 07:05:43.232231       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 07:05:43.232240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 07:05:43.232248       1 cache.go:39] Caches are synced for autoregister controller
	I1210 07:05:43.234887       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 07:05:43.254671       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 07:05:43.906828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 07:05:45.331665       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 07:05:45.423144       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 07:05:45.484438       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 07:05:45.503231       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 07:05:46.656592       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 07:05:46.756355       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [93385a5ea66dfb22d59cb4438f95b57031a2904e9201d0c5593ddb98cc2ff788] <==
	I1210 07:04:42.201669       1 serving.go:386] Generated self-signed cert in-memory
	I1210 07:04:42.978025       1 controllermanager.go:191] "Starting" version="v1.34.3"
	I1210 07:04:42.978080       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:04:42.980383       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 07:04:42.980651       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 07:04:42.981018       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1210 07:04:42.981084       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 07:04:57.000638       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [9d0fef2ee087eaf1645a535bd5e565cb7e4276e629e2a92bd310b9c565f388e7] <==
	I1210 07:05:46.549650       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 07:05:46.550604       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:05:46.550862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 07:05:46.550911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 07:05:46.551013       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 07:05:46.552641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 07:05:46.552729       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 07:05:46.552783       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 07:05:46.553752       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 07:05:46.553845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:05:46.557837       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:05:46.557878       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:05:46.557887       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 07:05:46.558879       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 07:05:46.558980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:05:46.559231       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 07:05:46.559771       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 07:05:46.564076       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 07:05:46.564585       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 07:05:46.569514       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 07:05:46.571329       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 07:05:46.578277       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 07:05:46.582224       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 07:05:46.589469       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:05:46.590892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86] <==
	I1210 07:04:21.493379       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:04:21.735168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1210 07:04:21.740919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:22.854469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:25.141630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:04:28.761637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-179913&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [d3f82629cddb72732962082ac5437805942f88e41649e039cb4d0c94e5fe9869] <==
	I1210 07:05:44.028069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:05:44.130650       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:05:44.130702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.172"]
	E1210 07:05:44.130784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:05:44.227305       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1210 07:05:44.227806       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 07:05:44.228047       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:05:44.256252       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:05:44.256818       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:05:44.257405       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:05:44.262856       1 config.go:200] "Starting service config controller"
	I1210 07:05:44.273685       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:05:44.263974       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:05:44.273866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:05:44.263990       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:05:44.273921       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:05:44.271189       1 config.go:309] "Starting node config controller"
	I1210 07:05:44.273973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:05:44.273994       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:05:44.373882       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:05:44.373987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:05:44.374002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1edd69d2810bce69fcfa37e6d911b6ef0ca221e589da9070b85d1fe9098e2ccf] <==
	
	
	==> kube-scheduler [2d6458c094802a443cc98268f45c071f7ee251f8ddef7849f9d31a4d3bf71726] <==
	E1210 07:05:40.137680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.83.172:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:05:40.159830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.83.172:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:05:40.160178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.172:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:05:40.211034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.83.172:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:05:40.236710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.83.172:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:05:40.308884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.172:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.172:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:05:43.088621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:05:43.090812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:05:43.090925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:05:43.091002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:05:43.091085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:05:43.091209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:05:43.091292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:05:43.091383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:05:43.091473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:05:43.091618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:05:43.091718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:05:43.091800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:05:43.091880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:05:43.091937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:05:43.091989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:05:43.096001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 07:05:43.160984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:05:43.164460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1210 07:05:48.938893       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.059315    5213 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-179913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.111752    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="3720f22a5a575d6f6cca10f29e0903b1" pod="kube-system/kube-apiserver-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.117724    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="938741addc56bf029154c923341405b9" pod="kube-system/kube-controller-manager-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.118730    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nnm25\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="e290831e-e710-4fc6-9170-9661176ac06f" pod="kube-system/coredns-66bc5c9577-nnm25"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.119619    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-rvmnw\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="9d3714c1-16f4-4178-9896-49713556e897" pod="kube-system/kube-proxy-rvmnw"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.121193    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-qcwf4\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="3c3003a3-50a1-4248-a19f-458f0c14923b" pod="kube-system/coredns-66bc5c9577-qcwf4"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.130287    5213 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-179913\" is forbidden: User \"system:node:pause-179913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-179913' and this object" podUID="d5c54887e4f2545a88be76f1664442e1" pod="kube-system/etcd-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: E1210 07:05:43.156632    5213 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 10 07:05:43 pause-179913 kubelet[5213]:         pods "kube-controller-manager-pause-179913" is forbidden: User "system:node:pause-179913" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-179913' and this object
	Dec 10 07:05:43 pause-179913 kubelet[5213]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Dec 10 07:05:43 pause-179913 kubelet[5213]:  > podUID="938741addc56bf029154c923341405b9" pod="kube-system/kube-controller-manager-pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.295144    5213 kubelet_node_status.go:124] "Node was previously registered" node="pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.297734    5213 kubelet_node_status.go:78] "Successfully registered node" node="pause-179913"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.297829    5213 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.301489    5213 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.545619    5213 scope.go:117] "RemoveContainer" containerID="f9ae0c7983bb05aad73287267b4fe1dec8f7753c19691f036ae5eb30c9a53e71"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.546057    5213 scope.go:117] "RemoveContainer" containerID="8d319ed6655edf622e672cad35dae95b38725e61f86172552ed36c899e5f0b86"
	Dec 10 07:05:43 pause-179913 kubelet[5213]: I1210 07:05:43.548677    5213 scope.go:117] "RemoveContainer" containerID="7d19e592c4f7ec5a3e5ebe3d5a7146ad75c9185238096e24895d73cb4cecea08"
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.558818    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod938741addc56bf029154c923341405b9/crio-53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb: Error finding container 53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb: Status 404 returned error can't find the container with id 53fb4db082b1db50deb3198bc01f9fb42592dbd03fe186bf9f389717bdfb34eb
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.559965    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3720f22a5a575d6f6cca10f29e0903b1/crio-4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346: Error finding container 4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346: Status 404 returned error can't find the container with id 4d9848fb765d56839bda736bb41de9e5390520aa893855d7c2433b1ebbe85346
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.560606    5213 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb47cf332bf25146df9a84a50003ecfff/crio-de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305: Error finding container de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305: Status 404 returned error can't find the container with id de62457ace834715bb345bb78324b37b95bfee94e268292a0c628394a0947305
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.601688    5213 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765350347601261866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:47 pause-179913 kubelet[5213]: E1210 07:05:47.601731    5213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765350347601261866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:57 pause-179913 kubelet[5213]: E1210 07:05:57.604306    5213 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765350357603648681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	Dec 10 07:05:57 pause-179913 kubelet[5213]: E1210 07:05:57.605126    5213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765350357603648681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:94590}  inodes_used:{value:43}}"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-179913 -n pause-179913
helpers_test.go:270: (dbg) Run:  kubectl --context pause-179913 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (146.83s)


Test pass (364/431)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.3/json-events 3.12
14 TestDownloadOnly/v1.34.3/cached-images 0.46
15 TestDownloadOnly/v1.34.3/binaries 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.17
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.24
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.96
31 TestOffline 147.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 427.82
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 9.56
45 TestAddons/parallel/RegistryCreds 0.67
47 TestAddons/parallel/InspektorGadget 11.71
48 TestAddons/parallel/MetricsServer 5.8
50 TestAddons/parallel/CSI 74.35
51 TestAddons/parallel/Headlamp 34.76
52 TestAddons/parallel/CloudSpanner 5.6
54 TestAddons/parallel/NvidiaDevicePlugin 6.57
55 TestAddons/parallel/Yakd 11.84
57 TestAddons/StoppedEnableDisable 77.97
58 TestCertOptions 96.63
59 TestCertExpiration 281.35
61 TestForceSystemdFlag 76.46
62 TestForceSystemdEnv 84.71
67 TestErrorSpam/setup 53.12
68 TestErrorSpam/start 0.37
69 TestErrorSpam/status 0.7
70 TestErrorSpam/pause 1.6
71 TestErrorSpam/unpause 1.86
72 TestErrorSpam/stop 75.09
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 97.49
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 55.77
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
84 TestFunctional/serial/CacheCmd/cache/add_local 1.14
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 38.21
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.45
96 TestFunctional/serial/InvalidService 4.57
98 TestFunctional/parallel/ConfigCmd 0.51
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.14
102 TestFunctional/parallel/StatusCmd 0.77
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 33.16
110 TestFunctional/parallel/SSHCmd 0.41
111 TestFunctional/parallel/CpCmd 1.4
112 TestFunctional/parallel/MySQL 34.37
113 TestFunctional/parallel/FileSync 0.22
114 TestFunctional/parallel/CertSync 1.37
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
122 TestFunctional/parallel/License 0.24
123 TestFunctional/parallel/Version/short 0.06
124 TestFunctional/parallel/Version/components 0.47
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.03
130 TestFunctional/parallel/ImageCommands/Setup 0.49
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.04
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 11.91
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.45
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
152 TestFunctional/parallel/MountCmd/any-port 23
153 TestFunctional/parallel/ProfileCmd/profile_list 0.34
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
155 TestFunctional/parallel/MountCmd/specific-port 1.15
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.06
157 TestFunctional/parallel/ServiceCmd/List 1.22
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 81.59
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 50.13
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.11
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.14
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.1
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.58
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.14
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 37.7
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.43
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.41
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.66
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.71
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 65.01
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.33
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.33
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 61.06
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.22
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.27
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.33
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.23
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.08
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.08
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.08
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.45
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.19
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.2
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.19
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 2.8
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.16
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.29
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.52
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.53
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.79
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.63
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.33
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.33
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.33
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 46.87
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.4
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.42
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 1.21
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 1.2
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 213.34
262 TestMultiControlPlane/serial/DeployApp 6.35
263 TestMultiControlPlane/serial/PingHostFromPods 1.4
264 TestMultiControlPlane/serial/AddWorkerNode 42.89
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
267 TestMultiControlPlane/serial/CopyFile 11.17
268 TestMultiControlPlane/serial/StopSecondaryNode 90.98
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
270 TestMultiControlPlane/serial/RestartSecondaryNode 37.83
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 377.73
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.21
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 251.53
276 TestMultiControlPlane/serial/RestartCluster 121.9
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
278 TestMultiControlPlane/serial/AddSecondaryNode 82.99
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
284 TestJSONOutput/start/Command 92.07
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.77
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.65
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.02
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 110.19
316 TestMountStart/serial/StartWithMountFirst 20.31
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 20.15
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.69
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.34
323 TestMountStart/serial/RestartStopped 21
324 TestMountStart/serial/VerifyMountPostStop 0.32
327 TestMultiNode/serial/FreshStart2Nodes 114.81
328 TestMultiNode/serial/DeployApp2Nodes 5.04
329 TestMultiNode/serial/PingHostFrom2Pods 0.9
330 TestMultiNode/serial/AddNode 43.41
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.24
334 TestMultiNode/serial/StopNode 2.42
335 TestMultiNode/serial/StartAfterStop 39.02
336 TestMultiNode/serial/RestartKeepsNodes 317.22
337 TestMultiNode/serial/DeleteNode 2.65
338 TestMultiNode/serial/StopMultiNode 176.49
339 TestMultiNode/serial/RestartMultiNode 113.66
340 TestMultiNode/serial/ValidateNameConflict 55.52
345 TestPreload 163.21
347 TestScheduledStopUnix 125.03
351 TestRunningBinaryUpgrade 131.33
353 TestKubernetesUpgrade 182.54
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestStoppedBinaryUpgrade/Setup 0.57
358 TestNoKubernetes/serial/StartWithK8s 78.44
359 TestStoppedBinaryUpgrade/Upgrade 139.26
360 TestNoKubernetes/serial/StartWithStopK8s 31.56
361 TestNoKubernetes/serial/Start 34.84
362 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
363 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
364 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
365 TestNoKubernetes/serial/ProfileList 1.07
366 TestNoKubernetes/serial/Stop 1.36
367 TestNoKubernetes/serial/StartNoArgs 63.42
376 TestPause/serial/Start 124.72
377 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
385 TestNetworkPlugins/group/false 4.2
389 TestISOImage/Setup 35.73
391 TestISOImage/Binaries/crictl 0.22
392 TestISOImage/Binaries/curl 0.18
393 TestISOImage/Binaries/docker 0.19
394 TestISOImage/Binaries/git 0.3
395 TestISOImage/Binaries/iptables 0.18
396 TestISOImage/Binaries/podman 0.18
397 TestISOImage/Binaries/rsync 0.21
398 TestISOImage/Binaries/socat 0.19
399 TestISOImage/Binaries/wget 0.19
400 TestISOImage/Binaries/VBoxControl 0.19
401 TestISOImage/Binaries/VBoxService 0.21
404 TestStartStop/group/old-k8s-version/serial/FirstStart 96.66
406 TestStartStop/group/no-preload/serial/FirstStart 93.98
408 TestStartStop/group/embed-certs/serial/FirstStart 100.66
409 TestStartStop/group/old-k8s-version/serial/DeployApp 10.36
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
411 TestStartStop/group/old-k8s-version/serial/Stop 82.57
412 TestStartStop/group/no-preload/serial/DeployApp 11.34
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
414 TestStartStop/group/no-preload/serial/Stop 87.83
415 TestStartStop/group/embed-certs/serial/DeployApp 9.32
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
417 TestStartStop/group/embed-certs/serial/Stop 71.96
418 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
419 TestStartStop/group/old-k8s-version/serial/SecondStart 48.71
421 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 111.97
422 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
423 TestStartStop/group/no-preload/serial/SecondStart 66.36
424 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
425 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
426 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
427 TestStartStop/group/embed-certs/serial/SecondStart 67.05
428 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
429 TestStartStop/group/old-k8s-version/serial/Pause 3.4
431 TestStartStop/group/newest-cni/serial/FirstStart 60.85
432 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
433 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
434 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.75
435 TestStartStop/group/no-preload/serial/Pause 3.05
436 TestNetworkPlugins/group/auto/Start 99.55
437 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
438 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.39
439 TestStartStop/group/default-k8s-diff-port/serial/Stop 71.71
440 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
441 TestStartStop/group/newest-cni/serial/DeployApp 0
442 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
443 TestStartStop/group/newest-cni/serial/Stop 8.94
444 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
445 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
446 TestStartStop/group/newest-cni/serial/SecondStart 38.14
447 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.74
448 TestStartStop/group/embed-certs/serial/Pause 2.91
449 TestNetworkPlugins/group/kindnet/Start 89.81
450 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
452 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.8
453 TestStartStop/group/newest-cni/serial/Pause 4.09
454 TestNetworkPlugins/group/calico/Start 92.19
455 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
456 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 75.22
457 TestNetworkPlugins/group/auto/KubeletFlags 0.22
458 TestNetworkPlugins/group/auto/NetCatPod 12.36
459 TestNetworkPlugins/group/auto/DNS 0.23
460 TestNetworkPlugins/group/auto/Localhost 0.19
461 TestNetworkPlugins/group/auto/HairPin 0.21
462 TestNetworkPlugins/group/custom-flannel/Start 92.81
463 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
465 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
466 TestNetworkPlugins/group/kindnet/DNS 0.21
467 TestNetworkPlugins/group/kindnet/Localhost 0.16
468 TestNetworkPlugins/group/kindnet/HairPin 0.17
469 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
470 TestNetworkPlugins/group/enable-default-cni/Start 100.57
471 TestNetworkPlugins/group/calico/ControllerPod 6.01
472 TestNetworkPlugins/group/calico/KubeletFlags 0.23
473 TestNetworkPlugins/group/calico/NetCatPod 14.11
474 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
475 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.75
476 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
477 TestNetworkPlugins/group/calico/DNS 0.24
478 TestNetworkPlugins/group/calico/Localhost 0.2
479 TestNetworkPlugins/group/calico/HairPin 0.2
480 TestNetworkPlugins/group/flannel/Start 87.02
481 TestNetworkPlugins/group/bridge/Start 103.72
482 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
483 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
484 TestNetworkPlugins/group/custom-flannel/DNS 0.21
485 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
486 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
488 TestISOImage/PersistentMounts//data 0.18
489 TestISOImage/PersistentMounts//var/lib/docker 0.26
490 TestISOImage/PersistentMounts//var/lib/cni 0.2
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
492 TestISOImage/PersistentMounts//var/lib/minikube 0.19
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.19
495 TestISOImage/VersionJSON 0.19
496 TestISOImage/eBPFSupport 0.18
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
499 TestNetworkPlugins/group/flannel/ControllerPod 6.01
500 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
501 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
502 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
504 TestNetworkPlugins/group/flannel/NetCatPod 10.24
505 TestNetworkPlugins/group/flannel/DNS 0.17
506 TestNetworkPlugins/group/flannel/Localhost 0.16
507 TestNetworkPlugins/group/flannel/HairPin 0.18
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
509 TestNetworkPlugins/group/bridge/NetCatPod 11.24
510 TestNetworkPlugins/group/bridge/DNS 0.15
511 TestNetworkPlugins/group/bridge/Localhost 0.13
512 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-140393 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-140393 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.418761797s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:28:42.408613  247366 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 05:28:42.408709  247366 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-140393
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-140393: exit status 85 (81.142894ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-140393 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:36
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:36.047205  247378 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:36.047489  247378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:36.047501  247378 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:36.047506  247378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:36.047763  247378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	W1210 05:28:36.047948  247378 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22094-243461/.minikube/config/config.json: open /home/jenkins/minikube-integration/22094-243461/.minikube/config/config.json: no such file or directory
	I1210 05:28:36.048518  247378 out.go:368] Setting JSON to true
	I1210 05:28:36.049552  247378 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25863,"bootTime":1765318653,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:36.049621  247378 start.go:143] virtualization: kvm guest
	I1210 05:28:36.055378  247378 out.go:99] [download-only-140393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1210 05:28:36.055645  247378 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22094-243461/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 05:28:36.055672  247378 notify.go:221] Checking for updates...
	I1210 05:28:36.056807  247378 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:36.058219  247378 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:36.059490  247378 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:36.060621  247378 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:36.065657  247378 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:28:36.068372  247378 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:36.068677  247378 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:36.104700  247378 out.go:99] Using the kvm2 driver based on user configuration
	I1210 05:28:36.104743  247378 start.go:309] selected driver: kvm2
	I1210 05:28:36.104751  247378 start.go:927] validating driver "kvm2" against <nil>
	I1210 05:28:36.105172  247378 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:36.105701  247378 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1210 05:28:36.105865  247378 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:36.105907  247378 cni.go:84] Creating CNI manager for ""
	I1210 05:28:36.105977  247378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 05:28:36.105988  247378 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:36.106043  247378 start.go:353] cluster config:
	{Name:download-only-140393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-140393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:28:36.106269  247378 iso.go:125] acquiring lock: {Name:mkd598cf63ca899d26ff5ae5308f8a58215a80b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:28:36.108119  247378 out.go:99] Downloading VM boot image ...
	I1210 05:28:36.108185  247378 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1210 05:28:39.001201  247378 out.go:99] Starting "download-only-140393" primary control-plane node in "download-only-140393" cluster
	I1210 05:28:39.001249  247378 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:28:39.025613  247378 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:28:39.025653  247378 cache.go:65] Caching tarball of preloaded images
	I1210 05:28:39.025954  247378 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:28:39.027782  247378 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 05:28:39.027847  247378 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 05:28:39.047330  247378 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1210 05:28:39.047460  247378 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22094-243461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-140393 host does not exist
	  To start a cluster, run: "minikube start -p download-only-140393"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-140393
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (3.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-829998 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-829998 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.117103257s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.12s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
I1210 05:28:45.991312  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 05:28:46.142114  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 05:28:46.306954  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.34.3/cached-images (0.46s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
--- PASS: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-829998
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-829998: exit status 85 (77.046794ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-140393 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-140393                                                                                                                                                 │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-829998 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:42.870111  247557 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:42.870234  247557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:42.870240  247557 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:42.870244  247557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:42.870446  247557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:28:42.870933  247557 out.go:368] Setting JSON to true
	I1210 05:28:42.871813  247557 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25870,"bootTime":1765318653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:42.871892  247557 start.go:143] virtualization: kvm guest
	I1210 05:28:42.873807  247557 out.go:99] [download-only-829998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:42.874026  247557 notify.go:221] Checking for updates...
	I1210 05:28:42.875525  247557 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:42.877003  247557 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:42.878577  247557 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:42.882558  247557 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:42.884069  247557 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-829998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-829998"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-829998
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (3.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-160810 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-160810 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.23605745s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.24s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1210 05:28:50.086664  247366 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1210 05:28:50.086720  247366 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-160810
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-160810: exit status 85 (78.731765ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-140393 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-140393                                                                                                                                                      │ download-only-140393 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-829998 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ delete  │ -p download-only-829998                                                                                                                                                      │ download-only-829998 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │ 10 Dec 25 05:28 UTC │
	│ start   │ -o=json --download-only -p download-only-160810 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-160810 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:46.906775  247791 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:46.907104  247791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.907118  247791 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:46.907121  247791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:46.907397  247791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:28:46.907962  247791 out.go:368] Setting JSON to true
	I1210 05:28:46.908895  247791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25874,"bootTime":1765318653,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:28:46.908960  247791 start.go:143] virtualization: kvm guest
	I1210 05:28:46.910947  247791 out.go:99] [download-only-160810] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:28:46.911361  247791 notify.go:221] Checking for updates...
	I1210 05:28:46.913305  247791 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:46.915017  247791 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:46.916532  247791 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:28:46.921572  247791 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:28:46.923019  247791 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-160810 host does not exist
	  To start a cluster, run: "minikube start -p download-only-160810"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-160810
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.96s)

                                                
                                                
=== RUN   TestBinaryMirror
I1210 05:28:50.944755  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-177372 --alsologtostderr --binary-mirror http://127.0.0.1:39073 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-177372" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-177372
--- PASS: TestBinaryMirror (0.96s)

                                                
                                    
TestOffline (147.54s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-045672 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-045672 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (2m26.451707991s)
helpers_test.go:176: Cleaning up "offline-crio-045672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-045672
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-045672: (1.091808106s)
--- PASS: TestOffline (147.54s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-819501
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-819501: exit status 85 (69.892598ms)

                                                
                                                
-- stdout --
	* Profile "addons-819501" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-819501"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-819501
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-819501: exit status 85 (69.311765ms)

                                                
                                                
-- stdout --
	* Profile "addons-819501" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-819501"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (427.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-819501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-819501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m7.820864429s)
--- PASS: TestAddons/Setup (427.82s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-819501 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-819501 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-819501 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-819501 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [479ad0f6-afd3-427d-9618-0e77a36d2f86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [479ad0f6-afd3-427d-9618-0e77a36d2f86] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004591437s
addons_test.go:696: (dbg) Run:  kubectl --context addons-819501 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-819501 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-819501 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.115607ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-819501
addons_test.go:334: (dbg) Run:  kubectl --context addons-819501 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-dcb8k" [0b1deb0a-6311-4750-ab19-b48f0e6eab40] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005052894s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable inspektor-gadget --alsologtostderr -v=1: (5.703237652s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 11.049184ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-bqdmn" [6439b312-6541-4ed0-94d7-900f65d427bd] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004793973s
addons_test.go:465: (dbg) Run:  kubectl --context addons-819501 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

                                                
                                    
TestAddons/parallel/CSI (74.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 05:36:30.539699  247366 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 05:36:30.544030  247366 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:36:30.544055  247366 kapi.go:107] duration metric: took 4.381855ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.392021ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-819501 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-819501 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [39951177-9717-4542-81d4-38e3ad026fd7] Pending
helpers_test.go:353: "task-pv-pod" [39951177-9717-4542-81d4-38e3ad026fd7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [39951177-9717-4542-81d4-38e3ad026fd7] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 46.004054778s
addons_test.go:574: (dbg) Run:  kubectl --context addons-819501 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-819501 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-819501 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-819501 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-819501 delete pod task-pv-pod: (1.071214598s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-819501 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-819501 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-819501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-819501 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [2fc1d29b-a40e-4a4d-a9cd-86704e374948] Pending
helpers_test.go:353: "task-pv-pod-restore" [2fc1d29b-a40e-4a4d-a9cd-86704e374948] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [2fc1d29b-a40e-4a4d-a9cd-86704e374948] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003935453s
addons_test.go:616: (dbg) Run:  kubectl --context addons-819501 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-819501 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-819501 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.147338072s)
--- PASS: TestAddons/parallel/CSI (74.35s)

                                                
                                    
TestAddons/parallel/Headlamp (34.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-819501 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-hgq9f" [bfd845fe-286f-4d59-a2b9-24ddfc5d6499] Pending
helpers_test.go:353: "headlamp-dfcdc64b-hgq9f" [bfd845fe-286f-4d59-a2b9-24ddfc5d6499] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-hgq9f" [bfd845fe-286f-4d59-a2b9-24ddfc5d6499] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 28.005590131s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable headlamp --alsologtostderr -v=1: (5.826963229s)
--- PASS: TestAddons/parallel/Headlamp (34.76s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-rnmz2" [6bdb418f-cf2d-4774-96c1-9ec565e1f323] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004263949s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-dztkj" [1272b6dc-2104-4d64-9673-e03010d430b4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.037201166s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-7csqc" [fe15bae8-0f68-40c4-ab80-6a89342de53a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006015558s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-819501 addons disable yakd --alsologtostderr -v=1: (5.837538947s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (77.97s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-819501
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-819501: (1m17.757680439s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-819501
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-819501
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-819501
--- PASS: TestAddons/StoppedEnableDisable (77.97s)

                                                
                                    
TestCertOptions (96.63s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-977501 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1210 07:04:19.786901  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-977501 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m35.074330288s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-977501 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-977501 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-977501 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-977501" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-977501
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-977501: (1.089472673s)
--- PASS: TestCertOptions (96.63s)

                                                
                                    
TestCertExpiration (281.35s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-198346 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-198346 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (58.076409562s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-198346 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1210 07:07:35.068008  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-198346 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.387397075s)
helpers_test.go:176: Cleaning up "cert-expiration-198346" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-198346
--- PASS: TestCertExpiration (281.35s)

                                                
                                    
TestForceSystemdFlag (76.46s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-302211 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-302211 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.33042744s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-302211 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-302211" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-302211
--- PASS: TestForceSystemdFlag (76.46s)

                                                
                                    
TestForceSystemdEnv (84.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-909953 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-909953 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.800052074s)
helpers_test.go:176: Cleaning up "force-systemd-env-909953" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-909953
--- PASS: TestForceSystemdEnv (84.71s)

                                                
                                    
TestErrorSpam/setup (53.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-786295 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-786295 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-786295 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-786295 --driver=kvm2  --container-runtime=crio: (53.121511083s)
--- PASS: TestErrorSpam/setup (53.12s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

                                                
                                    
TestErrorSpam/stop (75.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop: (1m11.870570688s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop: (1.209171966s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-786295 --log_dir /tmp/nospam-786295 stop: (2.013269717s)
--- PASS: TestErrorSpam/stop (75.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/test/nested/copy/247366/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1210 05:46:00.550928  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.557456  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.568964  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.590513  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.632043  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.713613  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:00.875268  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:01.197116  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:01.839445  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:03.121237  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:05.684322  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:10.806224  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:21.048164  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:41.529736  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:47:22.492587  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-399479 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.48783055s)
--- PASS: TestFunctional/serial/StartWithProxy (97.49s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (55.77s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1210 05:47:30.733814  247366 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-399479 --alsologtostderr -v=8: (55.767313998s)
functional_test.go:678: soft start took 55.768269454s for "functional-399479" cluster.
I1210 05:48:26.501530  247366 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (55.77s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-399479 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 cache add registry.k8s.io/pause:3.3: (1.057356966s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 cache add registry.k8s.io/pause:latest: (1.10802023s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-399479 /tmp/TestFunctionalserialCacheCmdcacheadd_local3372728159/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache add minikube-local-cache-test:functional-399479
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache delete minikube-local-cache-test:functional-399479
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-399479
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (185.126565ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 kubectl -- --context functional-399479 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-399479 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.21s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:48:44.414504  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-399479 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.207900311s)
functional_test.go:776: restart took 38.208066701s for "functional-399479" cluster.
I1210 05:49:11.443653  247366 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (38.21s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-399479 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 logs: (1.471839492s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 logs --file /tmp/TestFunctionalserialLogsFileCmd1830468932/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 logs --file /tmp/TestFunctionalserialLogsFileCmd1830468932/001/logs.txt: (1.446609325s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.57s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-399479 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-399479
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-399479: exit status 115 (265.034477ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.50.97:30309 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-399479 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-399479 delete -f testdata/invalidsvc.yaml: (1.100787606s)
--- PASS: TestFunctional/serial/InvalidService (4.57s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 config get cpus: exit status 14 (72.457477ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 config get cpus: exit status 14 (81.118327ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-399479 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (124.368544ms)

                                                
                                                
-- stdout --
	* [functional-399479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:49:55.945163  258858 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:49:55.945408  258858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:55.945417  258858 out.go:374] Setting ErrFile to fd 2...
	I1210 05:49:55.945421  258858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:55.945627  258858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:49:55.946089  258858 out.go:368] Setting JSON to false
	I1210 05:49:55.946957  258858 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27143,"bootTime":1765318653,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:49:55.947021  258858 start.go:143] virtualization: kvm guest
	I1210 05:49:55.952052  258858 out.go:179] * [functional-399479] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:49:55.953583  258858 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:49:55.953605  258858 notify.go:221] Checking for updates...
	I1210 05:49:55.956326  258858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:49:55.957692  258858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:49:55.959270  258858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:49:55.960817  258858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:49:55.962422  258858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:49:55.964225  258858 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:49:55.964726  258858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:49:55.999021  258858 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 05:49:56.000517  258858 start.go:309] selected driver: kvm2
	I1210 05:49:56.000535  258858 start.go:927] validating driver "kvm2" against &{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:56.000661  258858 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:49:56.002923  258858 out.go:203] 
	W1210 05:49:56.004356  258858 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:49:56.005790  258858 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399479 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-399479 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.55733ms)

                                                
                                                
-- stdout --
	* [functional-399479] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:49:55.823708  258842 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:49:55.823865  258842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:55.823901  258842 out.go:374] Setting ErrFile to fd 2...
	I1210 05:49:55.823912  258842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:49:55.824377  258842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 05:49:55.825049  258842 out.go:368] Setting JSON to false
	I1210 05:49:55.826372  258842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27143,"bootTime":1765318653,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:49:55.826458  258842 start.go:143] virtualization: kvm guest
	I1210 05:49:55.828837  258842 out.go:179] * [functional-399479] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:49:55.830447  258842 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:49:55.830461  258842 notify.go:221] Checking for updates...
	I1210 05:49:55.833570  258842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:49:55.835053  258842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 05:49:55.836463  258842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 05:49:55.837820  258842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:49:55.839269  258842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:49:55.841217  258842 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 05:49:55.841736  258842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:49:55.874625  258842 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 05:49:55.876043  258842 start.go:309] selected driver: kvm2
	I1210 05:49:55.876062  258842 start.go:927] validating driver "kvm2" against &{Name:functional-399479 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-399479 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:55.876212  258842 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:49:55.878627  258842 out.go:203] 
	W1210 05:49:55.880077  258842 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:49:55.881213  258842 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
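
The French messages above are the expected output for this test: minikube localizes its client strings, and the harness drives the same insufficient-memory path as DryRun so the RSRC_INSUFFICIENT_REQ_MEMORY text appears translated. A sketch of reproducing it by hand, assuming the locale is picked up from the standard LC_ALL environment variable (the exact variable the harness sets is not shown in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-399479 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio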

                                                
                                    
TestFunctional/parallel/StatusCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)
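
The second invocation above feeds a Go template to -f to pull several status fields at once; the same flag works for a single field, for example:

    out/minikube-linux-amd64 -p functional-399479 status -f "{{.APIServer}}"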

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7774b15b-c3b0-40b0-8e5b-d38cffdfc273] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.101823118s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-399479 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-399479 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-399479 get pvc myclaim -o=json
I1210 05:49:27.539500  247366 retry.go:31] will retry after 1.04418564s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9b232e83-33b0-46f4-9223-b05a88b5fefc ResourceVersion:752 Generation:0 CreationTimestamp:2025-12-10 05:49:27 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a41930 VolumeMode:0xc001a41940 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-399479 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-399479 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:49:28.810019  247366 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [65db2bad-260d-4e8f-9619-372110947a16] Pending
helpers_test.go:353: "sp-pod" [65db2bad-260d-4e8f-9619-372110947a16] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [65db2bad-260d-4e8f-9619-372110947a16] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004010294s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-399479 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-399479 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-399479 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:49:46.966170  247366 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a50b332a-ac62-404c-99a2-9880df367d9c] Pending
helpers_test.go:353: "sp-pod" [a50b332a-ac62-404c-99a2-9880df367d9c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004682402s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-399479 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.16s)
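
The retry message above embeds the applied object (via the kubectl.kubernetes.io/last-applied-configuration annotation), so the claim in testdata/storage-provisioner/pvc.yaml can be reconstructed approximately. A sketch of applying the same spec by hand (a reconstruction, not the literal test file):

    kubectl --context functional-399479 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
      volumeMode: Filesystem
    EOF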

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh -n functional-399479 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cp functional-399479:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2395977857/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh -n functional-399479 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh -n functional-399479 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

                                                
                                    
TestFunctional/parallel/MySQL (34.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-399479 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-vl4tc" [234c01a1-0983-4d96-98a7-4e5dc0f914d3] Pending
helpers_test.go:353: "mysql-6bcdcbc558-vl4tc" [234c01a1-0983-4d96-98a7-4e5dc0f914d3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-vl4tc" [234c01a1-0983-4d96-98a7-4e5dc0f914d3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.007921161s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;": exit status 1 (164.551054ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:49:42.959226  247366 retry.go:31] will retry after 1.365878959s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;": exit status 1 (156.639844ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:49:44.483255  247366 retry.go:31] will retry after 2.072343105s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;": exit status 1 (190.665449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:49:46.747096  247366 retry.go:31] will retry after 1.852640967s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;": exit status 1 (132.773865ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:49:48.733545  247366 retry.go:31] will retry after 4.972059692s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.37s)
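
The ERROR 2002 and ERROR 1045 responses above are typically transient while mysqld is still initializing inside the pod; the harness simply reruns the query with increasing back-off until it succeeds. A rough shell equivalent of that retry, using the same command as the log (loop bounds are arbitrary):

    for i in $(seq 1 30); do
      kubectl --context functional-399479 exec mysql-6bcdcbc558-vl4tc -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done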

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/247366/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /etc/test/nested/copy/247366/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
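
The hosts file checked above reaches the guest because minikube copies everything under $MINIKUBE_HOME/files into the node at the matching path when the profile starts (here MINIKUBE_HOME is the Jenkins workspace .minikube; ~/.minikube is the default). A sketch of seeding such a file by hand, assuming the default MINIKUBE_HOME:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/247366
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/247366/hosts
    out/minikube-linux-amd64 start -p functional-399479
    out/minikube-linux-amd64 -p functional-399479 ssh "cat /etc/test/nested/copy/247366/hosts"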

                                                
                                    
TestFunctional/parallel/CertSync (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/247366.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /etc/ssl/certs/247366.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/247366.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /usr/share/ca-certificates/247366.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2473662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /etc/ssl/certs/2473662.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2473662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /usr/share/ca-certificates/2473662.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)
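
The .pem files checked above are extra CA certificates that minikube installs into the node's trust store from $MINIKUBE_HOME/certs; the 51391683.0 and 3ec20f2e.0 names are the usual OpenSSL subject-hash links for the same certs. A quick way to inspect what was installed, reusing the ssh form from the test (standard openssl flags; the cert contents themselves are not in this log):

    out/minikube-linux-amd64 -p functional-399479 ssh \
      "sudo openssl x509 -in /etc/ssl/certs/247366.pem -noout -subject -enddate"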

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-399479 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
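
The go-template above prints only the label keys of the first node; plain kubectl shows the same labels with their values:

    kubectl --context functional-399479 get nodes --show-labels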

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "sudo systemctl is-active docker": exit status 1 (214.449781ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "sudo systemctl is-active containerd": exit status 1 (217.08729ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
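
The exit status 3 in the stderr blocks above is simply systemctl's code for an inactive unit, which is exactly what the test wants for the two non-selected runtimes. The complementary check, that the selected runtime is the one running, would look like this (crio is the systemd unit for CRI-O, the runtime this job uses; the test itself does not run this):

    out/minikube-linux-amd64 -p functional-399479 ssh "sudo systemctl is-active crio"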

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399479 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-399479
localhost/kicbase/echo-server:functional-399479
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399479 image ls --format short --alsologtostderr:
I1210 05:50:20.397527  259247 out.go:360] Setting OutFile to fd 1 ...
I1210 05:50:20.397822  259247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.397833  259247 out.go:374] Setting ErrFile to fd 2...
I1210 05:50:20.397842  259247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.398036  259247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:50:20.398585  259247 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.398679  259247 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.400615  259247 ssh_runner.go:195] Run: systemctl --version
I1210 05:50:20.402686  259247 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.403082  259247 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:50:20.403113  259247 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.403265  259247 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:50:20.490797  259247 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
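
As the stderr shows, image ls is answered by running crictl inside the node over ssh; the same data can be pulled directly when debugging image issues:

    out/minikube-linux-amd64 -p functional-399479 ssh "sudo crictl images --output json"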

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399479 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1            │ cd073f4c5f6a8 │ 740kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-399479 │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-399479 │ b170f94470ad0 │ 1.47MB │
│ public.ecr.aws/nginx/nginx              │ alpine            │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3           │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1               │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc      │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-399479 │ 5bfb249051e35 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3           │ aa27095f56193 │ 89MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1           │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.3               │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest            │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ beae173ccac6a │ 1.46MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4               │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.3           │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.3           │ aec12dadf56dd │ 53.9MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399479 image ls --format table --alsologtostderr:
I1210 05:50:24.019982  259313 out.go:360] Setting OutFile to fd 1 ...
I1210 05:50:24.020250  259313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:24.020262  259313 out.go:374] Setting ErrFile to fd 2...
I1210 05:50:24.020266  259313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:24.020521  259313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:50:24.021105  259313 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:24.021202  259313 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:24.023569  259313 ssh_runner.go:195] Run: systemctl --version
I1210 05:50:24.026400  259313 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:24.026976  259313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:50:24.027008  259313 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:24.027194  259313 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:50:24.115269  259313 ssh_runner.go:195] Run: sudo crictl images --output json
E1210 05:51:00.545335  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:28.256641  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399479 image ls --format json --alsologtostderr:
[{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:07ea3bc8c077aa2dea58d292bdb37e38198b1de3e5a5fc7d62359906a54be721"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73143588"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:d0377cec3c4eba230c281923387f4be168b48824185c60fb02783df5ada3126e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53850254"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772d
a31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:0e5c08f69a52d288f6d181c08d0142bb74acb7cf33025
7e57f835cf60d898a31"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76100234"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6cdf015a972b346dc904e4d8ee30fcff66495a96deb56b6c1000aa064eb71fa5"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76001424"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"3146866
1"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-399479"],"size":"4943877"},{"id":"b170f94470ad0ff4ce45185ca9fe92d8181092f2f0087e273ba52865225edbfc","repoDigests":["localhost/my-image@sha256:15196fd2a0a9714eeefff222af7571aea5bd19d5af77e12b82b28da64d22f405"],"repoTags":["localhost/my-image:functional-399479"],"size":"1468599"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"cd4e5f61c25799a764419e8b0c81e6da2529da8a1125f337f903d80c0488ebcd","repoDigests":["docker.io/library/8b5726ba0d4752cca4dcb6627c41e9b57401ffebdc2c96d78be994654b2a70be-tmp@sha256:ef03d062d5a243c2f9611fe0389a92a225f948a058b2fdc88a2b9bba9d44f1eb"
],"repoTags":[],"size":"1466017"},{"id":"5bfb249051e35552f01a7208607dad4dd8a0903877771a9a5da2ecc8935e2da1","repoDigests":["localhost/minikube-local-cache-test@sha256:4db0f649781998b626106e139df9fc3e8226f5ce1f384368928052826245ce52"],"repoTags":["localhost/minikube-local-cache-test:functional-399479"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/ngin
x/nginx:alpine"],"size":"54242145"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:18b3c745b7314e398516d8a850fe6b88f066f41f6fbd5132705145abc7da8fea"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89047338"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399479 image ls --format json --alsologtostderr:
I1210 05:50:23.820578  259302 out.go:360] Setting OutFile to fd 1 ...
I1210 05:50:23.820829  259302 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:23.820840  259302 out.go:374] Setting ErrFile to fd 2...
I1210 05:50:23.820844  259302 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:23.821082  259302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:50:23.821667  259302 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:23.821761  259302 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:23.824382  259302 ssh_runner.go:195] Run: systemctl --version
I1210 05:50:23.827311  259302 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:23.828033  259302 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:50:23.828070  259302 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:23.828288  259302 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:50:23.914579  259302 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399479 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: 5bfb249051e35552f01a7208607dad4dd8a0903877771a9a5da2ecc8935e2da1
repoDigests:
- localhost/minikube-local-cache-test@sha256:4db0f649781998b626106e139df9fc3e8226f5ce1f384368928052826245ce52
repoTags:
- localhost/minikube-local-cache-test:functional-399479
size: "3330"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:18b3c745b7314e398516d8a850fe6b88f066f41f6fbd5132705145abc7da8fea
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89047338"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-399479
size: "4943877"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:0e5c08f69a52d288f6d181c08d0142bb74acb7cf330257e57f835cf60d898a31
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76100234"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:07ea3bc8c077aa2dea58d292bdb37e38198b1de3e5a5fc7d62359906a54be721
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73143588"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:d0377cec3c4eba230c281923387f4be168b48824185c60fb02783df5ada3126e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53850254"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6cdf015a972b346dc904e4d8ee30fcff66495a96deb56b6c1000aa064eb71fa5
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76001424"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399479 image ls --format yaml --alsologtostderr:
I1210 05:50:20.593100  259258 out.go:360] Setting OutFile to fd 1 ...
I1210 05:50:20.593217  259258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.593222  259258 out.go:374] Setting ErrFile to fd 2...
I1210 05:50:20.593227  259258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.593442  259258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:50:20.594052  259258 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.594143  259258 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.596543  259258 ssh_runner.go:195] Run: systemctl --version
I1210 05:50:20.599531  259258 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.600002  259258 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:50:20.600030  259258 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.600197  259258 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:50:20.687800  259258 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh pgrep buildkitd: exit status 1 (168.387612ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr: (2.65029669s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cd4e5f61c25
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-399479
--> b170f94470a
Successfully tagged localhost/my-image:functional-399479
b170f94470ad0ff4ce45185ca9fe92d8181092f2f0087e273ba52865225edbfc
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399479 image build -t localhost/my-image:functional-399479 testdata/build --alsologtostderr:
I1210 05:50:20.956549  259280 out.go:360] Setting OutFile to fd 1 ...
I1210 05:50:20.956849  259280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.956861  259280 out.go:374] Setting ErrFile to fd 2...
I1210 05:50:20.956866  259280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:50:20.957158  259280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 05:50:20.957823  259280 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.958657  259280 config.go:182] Loaded profile config "functional-399479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 05:50:20.961247  259280 ssh_runner.go:195] Run: systemctl --version
I1210 05:50:20.963752  259280 main.go:143] libmachine: domain functional-399479 has defined MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.964372  259280 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6b:4b:15", ip: ""} in network mk-functional-399479: {Iface:virbr2 ExpiryTime:2025-12-10 06:46:09 +0000 UTC Type:0 Mac:52:54:00:6b:4b:15 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:functional-399479 Clientid:01:52:54:00:6b:4b:15}
I1210 05:50:20.964406  259280 main.go:143] libmachine: domain functional-399479 has defined IP address 192.168.50.97 and MAC address 52:54:00:6b:4b:15 in network mk-functional-399479
I1210 05:50:20.964593  259280 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399479/id_rsa Username:docker}
I1210 05:50:21.053589  259280 build_images.go:162] Building image from path: /tmp/build.930525264.tar
I1210 05:50:21.053691  259280 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:50:21.070890  259280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.930525264.tar
I1210 05:50:21.077053  259280 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.930525264.tar: stat -c "%s %y" /var/lib/minikube/build/build.930525264.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.930525264.tar': No such file or directory
I1210 05:50:21.077093  259280 ssh_runner.go:362] scp /tmp/build.930525264.tar --> /var/lib/minikube/build/build.930525264.tar (3072 bytes)
I1210 05:50:21.113777  259280 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.930525264
I1210 05:50:21.130016  259280 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.930525264 -xf /var/lib/minikube/build/build.930525264.tar
I1210 05:50:21.144091  259280 crio.go:315] Building image: /var/lib/minikube/build/build.930525264
I1210 05:50:21.144174  259280 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-399479 /var/lib/minikube/build/build.930525264 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:50:23.508244  259280 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-399479 /var/lib/minikube/build/build.930525264 --cgroup-manager=cgroupfs: (2.364042786s)
I1210 05:50:23.508324  259280 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.930525264
I1210 05:50:23.527144  259280 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.930525264.tar
I1210 05:50:23.541798  259280 build_images.go:218] Built localhost/my-image:functional-399479 from /tmp/build.930525264.tar
I1210 05:50:23.541842  259280 build_images.go:134] succeeded building to: functional-399479
I1210 05:50:23.541846  259280 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.03s)
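
The STEP lines in the stdout above reveal the build context, so the Dockerfile in testdata/build is effectively FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A sketch of reproducing the same build in a scratch directory (/tmp/imagebuild-demo and the content.txt payload are made up for illustration):

    mkdir -p /tmp/imagebuild-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/imagebuild-demo/Dockerfile
    echo demo > /tmp/imagebuild-demo/content.txt
    out/minikube-linux-amd64 -p functional-399479 image build -t localhost/my-image:functional-399479 /tmp/imagebuild-demo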

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-399479
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image load --daemon kicbase/echo-server:functional-399479 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 image load --daemon kicbase/echo-server:functional-399479 --alsologtostderr: (1.614793612s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image load --daemon kicbase/echo-server:functional-399479 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-399479
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image load --daemon kicbase/echo-server:functional-399479 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image save kicbase/echo-server:functional-399479 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 image save kicbase/echo-server:functional-399479 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (11.914270063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image rm kicbase/echo-server:functional-399479 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.124484525s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-399479
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 image save --daemon kicbase/echo-server:functional-399479 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-399479
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
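Taken together, the ImageCommands tests above exercise a full round trip between the host Docker daemon and the cluster's runtime. A condensed sketch of that round trip using the same commands and image as these tests (the tar path is illustrative):

    # Host daemon -> cluster runtime
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-399479
    out/minikube-linux-amd64 -p functional-399479 image load --daemon kicbase/echo-server:functional-399479

    # Cluster runtime -> tar file -> back into the cluster
    out/minikube-linux-amd64 -p functional-399479 image save kicbase/echo-server:functional-399479 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-399479 image rm kicbase/echo-server:functional-399479
    out/minikube-linux-amd64 -p functional-399479 image load /tmp/echo-server-save.tar

    # Cluster runtime -> host daemon (note the localhost/ prefix on the restored tag)
    out/minikube-linux-amd64 -p functional-399479 image save --daemon kicbase/echo-server:functional-399479
    docker image inspect localhost/kicbase/echo-server:functional-399479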

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdany-port524700410/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765345794126228958" to /tmp/TestFunctionalparallelMountCmdany-port524700410/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765345794126228958" to /tmp/TestFunctionalparallelMountCmdany-port524700410/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765345794126228958" to /tmp/TestFunctionalparallelMountCmdany-port524700410/001/test-1765345794126228958
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (179.231232ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:49:54.305811  247366 retry.go:31] will retry after 436.854404ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:49 test-1765345794126228958
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh cat /mount-9p/test-1765345794126228958
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-399479 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [927b8082-ea8c-4e49-a503-c6c8b3ff138b] Pending
helpers_test.go:353: "busybox-mount" [927b8082-ea8c-4e49-a503-c6c8b3ff138b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [927b8082-ea8c-4e49-a503-c6c8b3ff138b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [927b8082-ea8c-4e49-a503-c6c8b3ff138b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.004514969s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-399479 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdany-port524700410/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.00s)
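The mount test runs `minikube mount` as a background daemon and then verifies the 9p filesystem from inside the guest. A minimal sketch of the same check (the host directory is illustrative; a fixed port can be pinned with --port, as the specific-port test below does with 46464):

    # Export a host directory into the guest over 9p (runs until killed).
    out/minikube-linux-amd64 mount -p functional-399479 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &

    # From inside the guest, confirm the mount and list its contents.
    out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-399479 ssh -- ls -la /mount-9p

    # Tear down: force-unmount in the guest, then stop the background mount process.
    out/minikube-linux-amd64 -p functional-399479 ssh "sudo umount -f /mount-9p"
    kill $!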

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "273.732932ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.120271ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "277.034889ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.056252ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdspecific-port3496322953/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (162.215738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:50:17.288249  247366 retry.go:31] will retry after 282.784907ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdspecific-port3496322953/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "sudo umount -f /mount-9p": exit status 1 (167.829284ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-399479 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdspecific-port3496322953/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T" /mount1: exit status 1 (180.862728ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:50:18.458634  247366 retry.go:31] will retry after 320.754771ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-399479 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399479 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3391979705/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.06s)
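VerifyCleanup starts three concurrent mounts and then relies on `mount --kill=true` to terminate every mount daemon for the profile in one call, which is why the later "stopping" steps report that the parent process no longer exists. The cleanup step on its own:

    # Kill all background "minikube mount" processes belonging to this profile.
    out/minikube-linux-amd64 mount -p functional-399479 --kill=true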

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 service list: (1.218938368s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-399479 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-399479 service list -o json: (1.231383534s)
functional_test.go:1504: Took "1.231486017s" to run "out/minikube-linux-amd64 -p functional-399479 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-399479
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-399479
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-399479
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-243461/.minikube/files/etc/test/nested/copy/247366/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (81.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1210 06:01:00.545538  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-399582 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m21.589645783s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (81.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (50.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1210 06:01:09.652112  247366 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-399582 --alsologtostderr -v=8: (50.13186501s)
functional_test.go:678: soft start took 50.132354752s for "functional-399582" cluster.
I1210 06:01:59.784388  247366 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (50.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-399582 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:3.3: (1.079017604s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:latest: (1.070223289s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC846634257/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache add minikube-local-cache-test:functional-399582
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache delete minikube-local-cache-test:functional-399582
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.10s)
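The two cache tests above add images to minikube's local cache from a remote registry and from a locally built image. A condensed sketch using the same commands (the docker build directory is a placeholder for any context containing a Dockerfile):

    # Cache a remote image, then a locally built one, for the profile's runtime.
    out/minikube-linux-amd64 -p functional-399582 cache add registry.k8s.io/pause:3.1
    docker build -t minikube-local-cache-test:functional-399582 ./local-cache-context
    out/minikube-linux-amd64 -p functional-399582 cache add minikube-local-cache-test:functional-399582

    # Inspect and prune the cache.
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1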

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (184.782729ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.58s)
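cache_reload deletes a cached image from the node's runtime with crictl, confirms it is gone, and then uses `cache reload` to push everything in the local cache back onto the node. The same sequence by hand:

    # Remove the image from CRI-O on the node; inspecti should now fail.
    out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image gone"

    # Re-push all cached images to the node and verify the image is back.
    out/minikube-linux-amd64 -p functional-399582 cache reload
    out/minikube-linux-amd64 -p functional-399582 ssh sudo crictl inspecti registry.k8s.io/pause:latest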

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 kubectl -- --context functional-399582 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-399582 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (37.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:02:23.620364  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-399582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.697737493s)
functional_test.go:776: restart took 37.697900885s for "functional-399582" cluster.
I1210 06:02:44.189683  247366 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (37.70s)
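ExtraConfig restarts the already-running cluster with an additional apiserver flag; the --extra-config value is persisted in the profile (it reappears in the saved KubernetesConfig dumped by the DryRun tests further down). The restart in isolation:

    # Restart the existing profile, passing an admission-plugin setting through to kube-apiserver.
    out/minikube-linux-amd64 start -p functional-399582 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all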

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-399582 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
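ComponentHealth fetches the control-plane pods as JSON and checks each component's phase and Ready status. A rough equivalent with kubectl alone; the jsonpath expression is illustrative and assumes the usual kubeadm labels (component, tier) on the static pods:

    # Print each control-plane component with its pod phase.
    kubectl --context functional-399582 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'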

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 logs: (1.433862364s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4226800121/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4226800121/001/logs.txt: (1.4055676s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-399582 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-399582
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-399582: exit status 115 (252.16295ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.50.120:31000 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-399582 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-399582 delete -f testdata/invalidsvc.yaml: (1.197839446s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.66s)
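InvalidService confirms that `minikube service` fails cleanly (exit status 115, SVC_UNREACHABLE) when the Service has no running backing pod. Reproducing the failure path with the same manifest:

    # Create a Service with no working endpoints, then ask minikube to open it.
    kubectl --context functional-399582 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-399582   # exits 115
    kubectl --context functional-399582 delete -f testdata/invalidsvc.yaml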

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 config get cpus: exit status 14 (75.487416ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 config get cpus: exit status 14 (71.823689ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)
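ConfigCmd exercises the set/get/unset cycle for a profile-scoped config key; `config get` exits with status 14 whenever the key is absent. The cycle:

    out/minikube-linux-amd64 -p functional-399582 config set cpus 2
    out/minikube-linux-amd64 -p functional-399582 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-399582 config unset cpus
    out/minikube-linux-amd64 -p functional-399582 config get cpus      # exit 14: key not found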

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (116.605499ms)

                                                
                                                
-- stdout --
	* [functional-399582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:03:51.536719  264277 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:03:51.537002  264277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.537012  264277 out.go:374] Setting ErrFile to fd 2...
	I1210 06:03:51.537017  264277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.537210  264277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:03:51.537650  264277 out.go:368] Setting JSON to false
	I1210 06:03:51.538584  264277 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27979,"bootTime":1765318653,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:03:51.538648  264277 start.go:143] virtualization: kvm guest
	I1210 06:03:51.540899  264277 out.go:179] * [functional-399582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:03:51.542367  264277 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:03:51.542392  264277 notify.go:221] Checking for updates...
	I1210 06:03:51.544816  264277 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:03:51.546536  264277 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 06:03:51.547950  264277 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 06:03:51.549360  264277 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:03:51.550660  264277 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:03:51.552416  264277 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:03:51.552951  264277 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:03:51.585309  264277 out.go:179] * Using the kvm2 driver based on existing profile
	I1210 06:03:51.586635  264277 start.go:309] selected driver: kvm2
	I1210 06:03:51.586652  264277 start.go:927] validating driver "kvm2" against &{Name:functional-399582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-399582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:03:51.586771  264277 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:03:51.588772  264277 out.go:203] 
	W1210 06:03:51.590040  264277 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:03:51.591058  264277 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)
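DryRun validates start-time checks without touching the existing VM: requesting 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (the usable minimum reported is 1800MB), while a dry run against the profile's saved settings succeeds. The failing and passing invocations from this test:

    # Fails (exit 23): requested memory is below the 1800MB minimum.
    out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-rc.1

    # Passes: dry run with the existing profile's configuration.
    out/minikube-linux-amd64 start -p functional-399582 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-rc.1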

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-399582 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (121.304891ms)

                                                
                                                
-- stdout --
	* [functional-399582] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:03:51.766799  264309 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:03:51.767087  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767097  264309 out.go:374] Setting ErrFile to fd 2...
	I1210 06:03:51.767101  264309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:03:51.767389  264309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:03:51.767816  264309 out.go:368] Setting JSON to false
	I1210 06:03:51.768682  264309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27979,"bootTime":1765318653,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:03:51.768770  264309 start.go:143] virtualization: kvm guest
	I1210 06:03:51.770686  264309 out.go:179] * [functional-399582] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 06:03:51.772002  264309 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:03:51.772030  264309 notify.go:221] Checking for updates...
	I1210 06:03:51.774318  264309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:03:51.775690  264309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 06:03:51.776963  264309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 06:03:51.778488  264309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:03:51.780172  264309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:03:51.782707  264309 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:03:51.783277  264309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:03:51.816250  264309 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 06:03:51.817678  264309 start.go:309] selected driver: kvm2
	I1210 06:03:51.817698  264309 start.go:927] validating driver "kvm2" against &{Name:functional-399582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-399582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:03:51.817811  264309 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:03:51.819986  264309 out.go:203] 
	W1210 06:03:51.821327  264309 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:03:51.822701  264309 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)
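InternationalLanguage exercises minikube's localized output: it starts the existing functional-399582 profile under a French locale with a memory request far below the 1800MB minimum, so the expected result is the quick RSRC_INSUFFICIENT_REQ_MEMORY failure shown above rather than a cluster start. A minimal sketch of reproducing that check by hand (the exact locale value and memory syntax are assumptions, not taken from the test source):

# hypothetical reproduction of the localized low-memory failure
LC_ALL=fr out/minikube-linux-amd64 start -p functional-399582 --memory 250mb
# expected: immediate exit with RSRC_INSUFFICIENT_REQ_MEMORY (250MiB requested < 1800MB usable minimum)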

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.71s)
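The -f flag renders the status struct through a Go template; in the logged command the literal label "kublet:" is just template text, while the field actually read is {{.Kubelet}}. A hand-run sketch using the same fields (the label text here is chosen for illustration):

out/minikube-linux-amd64 -p functional-399582 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'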

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (65.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [ab242a4c-35c9-4fef-9a3a-5c1e1717225e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00531547s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-399582 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-399582 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-399582 get pvc myclaim -o=json
I1210 06:02:58.329562  247366 retry.go:31] will retry after 1.073233985s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:bcc15eaa-771a-44f6-8d6f-458b6d9feeaf ResourceVersion:820 Generation:0 CreationTimestamp:2025-12-10 06:02:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001fb3cc0 VolumeMode:0xc001fb3cd0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-399582 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-399582 apply -f testdata/storage-provisioner/pod.yaml
I1210 06:02:59.600851  247366 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d] Pending
helpers_test.go:353: "sp-pod" [cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [cf6e2eb7-fae4-4267-a8c9-acaee5ce9a4d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 49.005784249s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-399582 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-399582 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-399582 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [b288f573-315c-487e-972f-525794180d08] Pending
helpers_test.go:353: "sp-pod" [b288f573-315c-487e-972f-525794180d08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [b288f573-315c-487e-972f-525794180d08] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004246519s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-399582 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (65.01s)
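The claim created here can be reconstructed from the last-applied-configuration annotation logged above; a sketch of an equivalent manifest applied by hand (the layout of testdata/storage-provisioner/pvc.yaml is an assumption, the field values come from the log):

cat <<'EOF' | kubectl --context functional-399582 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# the k8s.io/minikube-hostpath provisioner should move the claim from Pending to Bound
kubectl --context functional-399582 get pvc myclaim -o jsonpath='{.status.phase}'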

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh -n functional-399582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cp functional-399582:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1907889463/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh -n functional-399582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh -n functional-399582 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (61.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-399582 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-k8mpc" [d6d84736-d96b-4bb0-9ace-fe6ef83567c2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-k8mpc" [d6d84736-d96b-4bb0-9ace-fe6ef83567c2] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 52.004208124s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;": exit status 1 (319.607595ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:03:45.004796  247366 retry.go:31] will retry after 848.646058ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;": exit status 1 (177.073543ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:03:46.031052  247366 retry.go:31] will retry after 1.407717163s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;": exit status 1 (217.019827ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:03:47.656328  247366 retry.go:31] will retry after 2.903108235s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;": exit status 1 (202.441915ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 06:03:50.762512  247366 retry.go:31] will retry after 2.482495526s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (61.06s)
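The ERROR 1045 and ERROR 2002 exits above are transient: the mysql container keeps initializing for a while after the pod reports Running, so the test simply retries the same query with backoff until it succeeds. A hand-run equivalent of that retry loop (the loop shape and sleep interval are assumptions; the command is the one logged above):

until kubectl --context functional-399582 exec mysql-7d7b65bc95-k8mpc -- mysql -ppassword -e "show databases;"; do
  sleep 2  # keep retrying while the server finishes initializing inside the container
done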

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/247366/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/test/nested/copy/247366/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.22s)
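FileSync checks minikube's file-sync mechanism: files placed under $MINIKUBE_HOME/files on the host are copied into the guest at the same path when the machine is provisioned. A sketch of staging such a file by hand (the directory layout is an assumption based on the path checked above; 247366 is the test process's PID):

# MINIKUBE_HOME for this run is /home/jenkins/minikube-integration/22094-243461/.minikube
mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/247366"
echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/247366/hosts"
# after the next start of the profile, the file should be readable inside the VM:
out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/test/nested/copy/247366/hosts"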

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/247366.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/ssl/certs/247366.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/247366.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /usr/share/ca-certificates/247366.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2473662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/ssl/certs/2473662.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2473662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /usr/share/ca-certificates/2473662.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.27s)
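The .0 names checked above are OpenSSL subject-hash aliases: alongside each <pid>.pem the test expects a copy named after the certificate's subject hash (51391683.0 and 3ec20f2e.0 in this run). A sketch of verifying that mapping, assuming openssl is available in the guest and that 51391683.0 corresponds to 247366.pem:

out/minikube-linux-amd64 -p functional-399582 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/247366.pem"
# expected to print the hash used as the .0 file name, e.g. 51391683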

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-399582 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)
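NodeLabels dumps every label key on the first node via a go-template and then looks for the minikube-specific ones. A narrower hand check for those keys (the exact label names the test asserts are an assumption; minikube normally stamps nodes with minikube.k8s.io/* labels):

kubectl --context functional-399582 get nodes -o jsonpath='{.items[0].metadata.labels}' | tr ',' '\n' | grep minikube.k8s.io
# typically shows keys such as minikube.k8s.io/name, minikube.k8s.io/version and minikube.k8s.io/commit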

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "sudo systemctl is-active docker": exit status 1 (162.444638ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "sudo systemctl is-active containerd": exit status 1 (167.563718ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.33s)
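The "Process exited with status 3" lines are the passing case here: on a crio cluster both the docker and containerd units are expected to be inactive, and systemctl is-active prints "inactive" while exiting non-zero (3 in this run), which minikube ssh surfaces as exit status 1. A hand check of the same condition:

out/minikube-linux-amd64 -p functional-399582 ssh "sudo systemctl is-active docker"; echo "exit=$?"
# expected on a crio cluster: stdout "inactive" and a non-zero exit from the wrapped command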

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399582 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-399582
localhost/kicbase/echo-server:functional-399582
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399582 image ls --format short --alsologtostderr:
I1210 06:03:56.664629  264500 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:56.664892  264500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:56.664900  264500 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:56.664904  264500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:56.665084  264500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:56.665687  264500 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:56.665797  264500 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:56.668425  264500 ssh_runner.go:195] Run: systemctl --version
I1210 06:03:56.671055  264500 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:56.671477  264500 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:56.671509  264500 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:56.671680  264500 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:56.752321  264500 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)
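As the Run: line in the stderr shows, each image ls format (short, table, json, yaml) is a client-side rendering of the same sudo crictl images --output json call made over ssh. The raw listing can also be read directly in the guest:

out/minikube-linux-amd64 -p functional-399582 ssh "sudo crictl images"
# same image set as above, rendered by crictl's own table output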

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399582 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-399582  │ 5bfb249051e35 │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/kicbase/echo-server           │ functional-399582  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399582 image ls --format table --alsologtostderr:
I1210 06:03:58.300796  264581 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:58.301085  264581 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:58.301096  264581 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:58.301099  264581 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:58.301322  264581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:58.302474  264581 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:58.302674  264581 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:58.305237  264581 ssh_runner.go:195] Run: systemctl --version
I1210 06:03:58.307255  264581 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:58.307707  264581 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:58.307729  264581 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:58.307899  264581 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:58.390989  264581 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399582 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-399582"],"size":"4943877"},{"id":"5bfb249051e35552f01a7208607dad4dd8a0903877771a9a5da2ecc8935e2da1","repoDigests":["localhost/minikube-local-cache-test@sha256:4db0f649781998b626106e139df9fc3e8226f5ce1f384368928052826245ce52"],"repoTags":["localhost/minikube-local-cache-test:functional-399582"],"size":"3330"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k
8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c800
02767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k
8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoD
igests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/cor
edns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399582 image ls --format json --alsologtostderr:
I1210 06:03:58.104303  264570 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:58.104429  264570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:58.104438  264570 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:58.104443  264570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:58.104708  264570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:58.105348  264570 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:58.105466  264570 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:58.107607  264570 ssh_runner.go:195] Run: systemctl --version
I1210 06:03:58.109749  264570 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:58.110230  264570 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:58.110267  264570 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:58.110408  264570 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:58.197403  264570 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399582 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-399582
size: "4943877"
- id: 5bfb249051e35552f01a7208607dad4dd8a0903877771a9a5da2ecc8935e2da1
repoDigests:
- localhost/minikube-local-cache-test@sha256:4db0f649781998b626106e139df9fc3e8226f5ce1f384368928052826245ce52
repoTags:
- localhost/minikube-local-cache-test:functional-399582
size: "3330"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399582 image ls --format yaml --alsologtostderr:
I1210 06:03:56.856041  264527 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:56.856296  264527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:56.856305  264527 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:56.856310  264527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:56.856496  264527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:56.857085  264527 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:56.857179  264527 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:56.859523  264527 ssh_runner.go:195] Run: systemctl --version
I1210 06:03:56.862074  264527 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:56.862566  264527 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:56.862595  264527 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:56.862766  264527 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:56.943651  264527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh pgrep buildkitd: exit status 1 (159.950525ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr: (2.438200269s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 560af602115
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-399582
--> 61943162dc2
Successfully tagged localhost/my-image:functional-399582
61943162dc2536bf363c3d922f9d967e9e4a92d990eca40d258b294a385e4b67
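The three STEP lines above imply the build context used by the test; a sketch of an equivalent context and the same image build invocation (the Dockerfile is reconstructed from the steps, and the content.txt payload is a placeholder since the log does not show it):

mkdir -p build && cd build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
echo "placeholder" > content.txt
out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 .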
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr:
I1210 06:03:57.211818  264549 out.go:360] Setting OutFile to fd 1 ...
I1210 06:03:57.211968  264549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:57.211979  264549 out.go:374] Setting ErrFile to fd 2...
I1210 06:03:57.211984  264549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:03:57.212171  264549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
I1210 06:03:57.212779  264549 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:57.213508  264549 config.go:182] Loaded profile config "functional-399582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:03:57.215772  264549 ssh_runner.go:195] Run: systemctl --version
I1210 06:03:57.218097  264549 main.go:143] libmachine: domain functional-399582 has defined MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:57.218592  264549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:fe:0f", ip: ""} in network mk-functional-399582: {Iface:virbr2 ExpiryTime:2025-12-10 07:00:04 +0000 UTC Type:0 Mac:52:54:00:f3:fe:0f Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-399582 Clientid:01:52:54:00:f3:fe:0f}
I1210 06:03:57.218624  264549 main.go:143] libmachine: domain functional-399582 has defined IP address 192.168.50.120 and MAC address 52:54:00:f3:fe:0f in network mk-functional-399582
I1210 06:03:57.218798  264549 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/functional-399582/id_rsa Username:docker}
I1210 06:03:57.301700  264549 build_images.go:162] Building image from path: /tmp/build.3514850755.tar
I1210 06:03:57.301805  264549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:03:57.315800  264549 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3514850755.tar
I1210 06:03:57.322380  264549 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3514850755.tar: stat -c "%s %y" /var/lib/minikube/build/build.3514850755.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3514850755.tar': No such file or directory
I1210 06:03:57.322424  264549 ssh_runner.go:362] scp /tmp/build.3514850755.tar --> /var/lib/minikube/build/build.3514850755.tar (3072 bytes)
I1210 06:03:57.362578  264549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3514850755
I1210 06:03:57.376053  264549 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3514850755 -xf /var/lib/minikube/build/build.3514850755.tar
I1210 06:03:57.388871  264549 crio.go:315] Building image: /var/lib/minikube/build/build.3514850755
I1210 06:03:57.388990  264549 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-399582 /var/lib/minikube/build/build.3514850755 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 06:03:59.548370  264549 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-399582 /var/lib/minikube/build/build.3514850755 --cgroup-manager=cgroupfs: (2.159349717s)
I1210 06:03:59.548449  264549 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3514850755
I1210 06:03:59.565133  264549 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3514850755.tar
I1210 06:03:59.579439  264549 build_images.go:218] Built localhost/my-image:functional-399582 from /tmp/build.3514850755.tar
I1210 06:03:59.579513  264549 build_images.go:134] succeeded building to: functional-399582
I1210 06:03:59.579518  264549 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
E1210 06:04:19.786835  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:19.793299  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:19.804804  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:19.826292  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:19.867803  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:19.949309  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:20.111108  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:20.432967  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:21.075179  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:22.357031  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:24.919031  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:30.040822  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:04:40.283148  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:05:00.765027  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:05:41.726723  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:06:00.545054  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:07:03.648490  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:19.786839  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:47.490714  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:00.545229  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.80s)
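Note: the three podman build STEP lines above fully describe the testdata/build context. A minimal reconstruction follows; only the STEP lines are taken from the log, the content.txt payload is an assumption.
# Hypothetical reconstruction of testdata/build; only the three STEP lines come from the log above.
mkdir -p testdata/build
echo "test content" > testdata/build/content.txt        # assumed payload for the ADD step
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Same invocation the test uses; minikube tars the context, copies it into the VM and runs podman build there.
out/minikube-linux-amd64 -p functional-399582 image build -t localhost/my-image:functional-399582 testdata/build --alsologtostderr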

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image load --daemon kicbase/echo-server:functional-399582 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 image load --daemon kicbase/echo-server:functional-399582 --alsologtostderr: (1.097954193s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image load --daemon kicbase/echo-server:functional-399582 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-399582
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image load --daemon kicbase/echo-server:functional-399582 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image save kicbase/echo-server:functional-399582 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image rm kicbase/echo-server:functional-399582 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-399582
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 image save --daemon kicbase/echo-server:functional-399582 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "259.70728ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.289978ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "256.29287ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.666931ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (46.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765346581790005656" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765346581790005656" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765346581790005656" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001/test-1765346581790005656
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.318665ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:03:01.947671  247366 retry.go:31] will retry after 419.8415ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:03 test-1765346581790005656
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh cat /mount-9p/test-1765346581790005656
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-399582 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [735ed8f9-562b-402d-a721-a403dd9bf390] Pending
helpers_test.go:353: "busybox-mount" [735ed8f9-562b-402d-a721-a403dd9bf390] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [735ed8f9-562b-402d-a721-a403dd9bf390] Running
helpers_test.go:353: "busybox-mount" [735ed8f9-562b-402d-a721-a403dd9bf390] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [735ed8f9-562b-402d-a721-a403dd9bf390] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 45.004463541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-399582 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4275954490/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (46.87s)
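Note: the 9p mount exercised above can be reproduced by hand with the same commands; a minimal sketch, assuming an arbitrary host directory (the busybox-mount pod from testdata/busybox-mount-test.yaml is part of the test data and omitted here).
# Manual version of the any-port mount check; host path and marker file are arbitrary.
mkdir -p /tmp/mount-demo && echo "created-by-test" > /tmp/mount-demo/created-by-test
out/minikube-linux-amd64 mount -p functional-399582 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is visible in the guest
out/minikube-linux-amd64 -p functional-399582 ssh -- ls -la /mount-9p                 # host files appear inside the VM
kill $MOUNT_PID                                                                        # stopping the mount daemon removes /mount-9p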

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3204528753/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.580599ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:03:48.859529  247366 retry.go:31] will retry after 482.969997ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3204528753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "sudo umount -f /mount-9p"
I1210 06:03:49.904443  247366 detect.go:223] nested VM detected
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "sudo umount -f /mount-9p": exit status 1 (179.798679ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-399582 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3204528753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T" /mount1: exit status 1 (238.047651ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:03:50.304920  247366 retry.go:31] will retry after 614.475102ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-399582 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-399582 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1928530898/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 service list: (1.206008736s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-399582 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-399582 service list -o json: (1.201876489s)
functional_test.go:1504: Took "1.201984334s" to run "out/minikube-linux-amd64 -p functional-399582 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-399582
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (213.34s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1210 06:14:19.786558  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:00.545297  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m32.741123795s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (213.34s)
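Note: for reference, the start invocation above with each flag annotated (the annotations are editorial, not from the log):
#   --ha                       provision a multi-control-plane (highly available) cluster
#   --memory 3072              memory per node, in MB
#   --wait true                block until the selected cluster components report healthy
#   --driver=kvm2              KVM virtual machines, as used throughout this run
#   --container-runtime=crio   use CRI-O as the container runtime
out/minikube-linux-amd64 -p ha-387221 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio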

TestMultiControlPlane/serial/DeployApp (6.35s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 kubectl -- rollout status deployment/busybox: (3.961393828s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-cqtsw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-v2j9n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-cqtsw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-v2j9n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-cqtsw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-v2j9n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.35s)

TestMultiControlPlane/serial/PingHostFromPods (1.4s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- sh -c "ping -c 1 192.168.50.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-cqtsw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-cqtsw -- sh -c "ping -c 1 192.168.50.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-v2j9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-v2j9n -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)
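Note: host.minikube.internal resolves inside the guest to the host side of the VM network (192.168.50.1 in this run); the check above can be repeated by hand against any of the busybox pods, e.g.:
# Pod name is taken from this run; substitute whatever `kubectl get pods` reports.
# awk 'NR==5' picks the answer line of busybox's nslookup output; cut extracts the address.
out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-387221 kubectl -- exec busybox-7b57f96db7-bd256 -- sh -c "ping -c 1 192.168.50.1"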

TestMultiControlPlane/serial/AddWorkerNode (42.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node add --alsologtostderr -v 5
E1210 06:17:51.984612  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:51.991123  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.002639  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.024238  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.066161  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.147762  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.309395  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:52.631177  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:53.273446  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:54.555343  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:57.117474  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:18:02.239654  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:18:12.481248  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 node add --alsologtostderr -v 5: (42.181877584s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (42.89s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-387221 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (11.17s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp testdata/cp-test.txt ha-387221:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2630606192/001/cp-test_ha-387221.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221:/home/docker/cp-test.txt ha-387221-m02:/home/docker/cp-test_ha-387221_ha-387221-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test_ha-387221_ha-387221-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221:/home/docker/cp-test.txt ha-387221-m03:/home/docker/cp-test_ha-387221_ha-387221-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test_ha-387221_ha-387221-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221:/home/docker/cp-test.txt ha-387221-m04:/home/docker/cp-test_ha-387221_ha-387221-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test_ha-387221_ha-387221-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp testdata/cp-test.txt ha-387221-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2630606192/001/cp-test_ha-387221-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m02:/home/docker/cp-test.txt ha-387221:/home/docker/cp-test_ha-387221-m02_ha-387221.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test_ha-387221-m02_ha-387221.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m02:/home/docker/cp-test.txt ha-387221-m03:/home/docker/cp-test_ha-387221-m02_ha-387221-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test_ha-387221-m02_ha-387221-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m02:/home/docker/cp-test.txt ha-387221-m04:/home/docker/cp-test_ha-387221-m02_ha-387221-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test_ha-387221-m02_ha-387221-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp testdata/cp-test.txt ha-387221-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2630606192/001/cp-test_ha-387221-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m03:/home/docker/cp-test.txt ha-387221:/home/docker/cp-test_ha-387221-m03_ha-387221.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test_ha-387221-m03_ha-387221.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m03:/home/docker/cp-test.txt ha-387221-m02:/home/docker/cp-test_ha-387221-m03_ha-387221-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test_ha-387221-m03_ha-387221-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m03:/home/docker/cp-test.txt ha-387221-m04:/home/docker/cp-test_ha-387221-m03_ha-387221-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test_ha-387221-m03_ha-387221-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp testdata/cp-test.txt ha-387221-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2630606192/001/cp-test_ha-387221-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m04:/home/docker/cp-test.txt ha-387221:/home/docker/cp-test_ha-387221-m04_ha-387221.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221 "sudo cat /home/docker/cp-test_ha-387221-m04_ha-387221.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m04:/home/docker/cp-test.txt ha-387221-m02:/home/docker/cp-test_ha-387221-m04_ha-387221-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m02 "sudo cat /home/docker/cp-test_ha-387221-m04_ha-387221-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 cp ha-387221-m04:/home/docker/cp-test.txt ha-387221-m03:/home/docker/cp-test_ha-387221-m04_ha-387221-m03.txt
E1210 06:18:32.963162  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 ssh -n ha-387221-m03 "sudo cat /home/docker/cp-test_ha-387221-m04_ha-387221-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.17s)

TestMultiControlPlane/serial/StopSecondaryNode (90.98s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node stop m02 --alsologtostderr -v 5
E1210 06:19:03.621808  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:19:13.926139  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:19:19.786790  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 node stop m02 --alsologtostderr -v 5: (1m30.4392685s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5: exit status 7 (537.663013ms)

-- stdout --
	ha-387221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-387221-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-387221-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-387221-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1210 06:20:03.946431  270568 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:20:03.946674  270568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:03.946682  270568 out.go:374] Setting ErrFile to fd 2...
	I1210 06:20:03.946686  270568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:03.946887  270568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:20:03.947089  270568 out.go:368] Setting JSON to false
	I1210 06:20:03.947117  270568 mustload.go:66] Loading cluster: ha-387221
	I1210 06:20:03.947199  270568 notify.go:221] Checking for updates...
	I1210 06:20:03.947562  270568 config.go:182] Loaded profile config "ha-387221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:20:03.947587  270568 status.go:174] checking status of ha-387221 ...
	I1210 06:20:03.949783  270568 status.go:371] ha-387221 host status = "Running" (err=<nil>)
	I1210 06:20:03.949800  270568 host.go:66] Checking if "ha-387221" exists ...
	I1210 06:20:03.952754  270568 main.go:143] libmachine: domain ha-387221 has defined MAC address 52:54:00:fe:0c:c5 in network mk-ha-387221
	I1210 06:20:03.953532  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:0c:c5", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:14:13 +0000 UTC Type:0 Mac:52:54:00:fe:0c:c5 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:ha-387221 Clientid:01:52:54:00:fe:0c:c5}
	I1210 06:20:03.953570  270568 main.go:143] libmachine: domain ha-387221 has defined IP address 192.168.50.147 and MAC address 52:54:00:fe:0c:c5 in network mk-ha-387221
	I1210 06:20:03.953757  270568 host.go:66] Checking if "ha-387221" exists ...
	I1210 06:20:03.954100  270568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:20:03.957186  270568 main.go:143] libmachine: domain ha-387221 has defined MAC address 52:54:00:fe:0c:c5 in network mk-ha-387221
	I1210 06:20:03.957779  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:0c:c5", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:14:13 +0000 UTC Type:0 Mac:52:54:00:fe:0c:c5 Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:ha-387221 Clientid:01:52:54:00:fe:0c:c5}
	I1210 06:20:03.957816  270568 main.go:143] libmachine: domain ha-387221 has defined IP address 192.168.50.147 and MAC address 52:54:00:fe:0c:c5 in network mk-ha-387221
	I1210 06:20:03.958034  270568 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/ha-387221/id_rsa Username:docker}
	I1210 06:20:04.050854  270568 ssh_runner.go:195] Run: systemctl --version
	I1210 06:20:04.060103  270568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:04.084208  270568 kubeconfig.go:125] found "ha-387221" server: "https://192.168.50.254:8443"
	I1210 06:20:04.084261  270568 api_server.go:166] Checking apiserver status ...
	I1210 06:20:04.084311  270568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:20:04.106955  270568 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2190/cgroup
	W1210 06:20:04.121541  270568 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:20:04.121609  270568 ssh_runner.go:195] Run: ls
	I1210 06:20:04.127177  270568 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8443/healthz ...
	I1210 06:20:04.131847  270568 api_server.go:279] https://192.168.50.254:8443/healthz returned 200:
	ok
	I1210 06:20:04.131884  270568 status.go:463] ha-387221 apiserver status = Running (err=<nil>)
	I1210 06:20:04.131898  270568 status.go:176] ha-387221 status: &{Name:ha-387221 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:20:04.131920  270568 status.go:174] checking status of ha-387221-m02 ...
	I1210 06:20:04.133675  270568 status.go:371] ha-387221-m02 host status = "Stopped" (err=<nil>)
	I1210 06:20:04.133701  270568 status.go:384] host is not running, skipping remaining checks
	I1210 06:20:04.133708  270568 status.go:176] ha-387221-m02 status: &{Name:ha-387221-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:20:04.133727  270568 status.go:174] checking status of ha-387221-m03 ...
	I1210 06:20:04.135209  270568 status.go:371] ha-387221-m03 host status = "Running" (err=<nil>)
	I1210 06:20:04.135230  270568 host.go:66] Checking if "ha-387221-m03" exists ...
	I1210 06:20:04.137856  270568 main.go:143] libmachine: domain ha-387221-m03 has defined MAC address 52:54:00:3a:f6:b0 in network mk-ha-387221
	I1210 06:20:04.138297  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f6:b0", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:16:32 +0000 UTC Type:0 Mac:52:54:00:3a:f6:b0 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:ha-387221-m03 Clientid:01:52:54:00:3a:f6:b0}
	I1210 06:20:04.138332  270568 main.go:143] libmachine: domain ha-387221-m03 has defined IP address 192.168.50.216 and MAC address 52:54:00:3a:f6:b0 in network mk-ha-387221
	I1210 06:20:04.138460  270568 host.go:66] Checking if "ha-387221-m03" exists ...
	I1210 06:20:04.138677  270568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:20:04.140776  270568 main.go:143] libmachine: domain ha-387221-m03 has defined MAC address 52:54:00:3a:f6:b0 in network mk-ha-387221
	I1210 06:20:04.141162  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3a:f6:b0", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:16:32 +0000 UTC Type:0 Mac:52:54:00:3a:f6:b0 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:ha-387221-m03 Clientid:01:52:54:00:3a:f6:b0}
	I1210 06:20:04.141182  270568 main.go:143] libmachine: domain ha-387221-m03 has defined IP address 192.168.50.216 and MAC address 52:54:00:3a:f6:b0 in network mk-ha-387221
	I1210 06:20:04.141327  270568 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/ha-387221-m03/id_rsa Username:docker}
	I1210 06:20:04.228907  270568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:04.251391  270568 kubeconfig.go:125] found "ha-387221" server: "https://192.168.50.254:8443"
	I1210 06:20:04.251424  270568 api_server.go:166] Checking apiserver status ...
	I1210 06:20:04.251469  270568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:20:04.276687  270568 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup
	W1210 06:20:04.292853  270568 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:20:04.292945  270568 ssh_runner.go:195] Run: ls
	I1210 06:20:04.300134  270568 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8443/healthz ...
	I1210 06:20:04.305551  270568 api_server.go:279] https://192.168.50.254:8443/healthz returned 200:
	ok
	I1210 06:20:04.305581  270568 status.go:463] ha-387221-m03 apiserver status = Running (err=<nil>)
	I1210 06:20:04.305590  270568 status.go:176] ha-387221-m03 status: &{Name:ha-387221-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:20:04.305608  270568 status.go:174] checking status of ha-387221-m04 ...
	I1210 06:20:04.307451  270568 status.go:371] ha-387221-m04 host status = "Running" (err=<nil>)
	I1210 06:20:04.307476  270568 host.go:66] Checking if "ha-387221-m04" exists ...
	I1210 06:20:04.310205  270568 main.go:143] libmachine: domain ha-387221-m04 has defined MAC address 52:54:00:db:af:ef in network mk-ha-387221
	I1210 06:20:04.310673  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:af:ef", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:17:55 +0000 UTC Type:0 Mac:52:54:00:db:af:ef Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:ha-387221-m04 Clientid:01:52:54:00:db:af:ef}
	I1210 06:20:04.310708  270568 main.go:143] libmachine: domain ha-387221-m04 has defined IP address 192.168.50.151 and MAC address 52:54:00:db:af:ef in network mk-ha-387221
	I1210 06:20:04.310844  270568 host.go:66] Checking if "ha-387221-m04" exists ...
	I1210 06:20:04.311079  270568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:20:04.312970  270568 main.go:143] libmachine: domain ha-387221-m04 has defined MAC address 52:54:00:db:af:ef in network mk-ha-387221
	I1210 06:20:04.313353  270568 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:af:ef", ip: ""} in network mk-ha-387221: {Iface:virbr2 ExpiryTime:2025-12-10 07:17:55 +0000 UTC Type:0 Mac:52:54:00:db:af:ef Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:ha-387221-m04 Clientid:01:52:54:00:db:af:ef}
	I1210 06:20:04.313391  270568 main.go:143] libmachine: domain ha-387221-m04 has defined IP address 192.168.50.151 and MAC address 52:54:00:db:af:ef in network mk-ha-387221
	I1210 06:20:04.313518  270568 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/ha-387221-m04/id_rsa Username:docker}
	I1210 06:20:04.397000  270568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:04.417798  270568 status.go:176] ha-387221-m04 status: &{Name:ha-387221-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (90.98s)
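For reference, the status trace above probes apiserver health by issuing a plain GET against the control-plane endpoint's /healthz path and treating an HTTP 200 with body "ok" as Running. Below is a minimal Go sketch of that probe, not minikube's own implementation; TLS verification is skipped purely to keep the sketch short, and the endpoint is the one from this run.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy mirrors the check in the trace above: GET <endpoint>/healthz
// and report healthy only on HTTP 200 with body "ok".
func apiServerHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy("https://192.168.50.254:8443") // endpoint from the log above
	fmt.Println(ok, err)
}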

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)
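The degraded/HAppy checks in this suite read cluster state from `minikube profile list --output json`, as run above. The sketch below shows one way to consume that output from Go; the top-level "valid"/"invalid" arrays and the per-profile "Name"/"Status" fields are assumptions based on this run's tooling, so verify them against your minikube version.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models the assumed shape of `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []json.RawMessage `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}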

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node start m02 --alsologtostderr -v 5
E1210 06:20:35.849088  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 node start m02 --alsologtostderr -v 5: (36.699085824s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5: (1.039394051s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1210 06:20:42.852490  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 stop --alsologtostderr -v 5
E1210 06:21:00.545476  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:22:51.984018  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:19.698100  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:19.790743  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 stop --alsologtostderr -v 5: (4m4.096766054s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 start --wait true --alsologtostderr -v 5
E1210 06:26:00.545606  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 start --wait true --alsologtostderr -v 5: (2m13.465163111s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.73s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 node delete m03 --alsologtostderr -v 5: (17.521168483s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (251.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 stop --alsologtostderr -v 5
E1210 06:27:51.984751  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:29:19.793346  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:31:00.545055  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 stop --alsologtostderr -v 5: (4m11.465429816s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5: exit status 7 (69.107299ms)

                                                
                                                
-- stdout --
	ha-387221
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-387221-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-387221-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:31:31.750922  274157 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:31.751030  274157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:31.751034  274157 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:31.751038  274157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:31.751215  274157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:31:31.751372  274157 out.go:368] Setting JSON to false
	I1210 06:31:31.751406  274157 mustload.go:66] Loading cluster: ha-387221
	I1210 06:31:31.751537  274157 notify.go:221] Checking for updates...
	I1210 06:31:31.751823  274157 config.go:182] Loaded profile config "ha-387221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:31:31.751840  274157 status.go:174] checking status of ha-387221 ...
	I1210 06:31:31.754282  274157 status.go:371] ha-387221 host status = "Stopped" (err=<nil>)
	I1210 06:31:31.754300  274157 status.go:384] host is not running, skipping remaining checks
	I1210 06:31:31.754306  274157 status.go:176] ha-387221 status: &{Name:ha-387221 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:31:31.754324  274157 status.go:174] checking status of ha-387221-m02 ...
	I1210 06:31:31.755534  274157 status.go:371] ha-387221-m02 host status = "Stopped" (err=<nil>)
	I1210 06:31:31.755549  274157 status.go:384] host is not running, skipping remaining checks
	I1210 06:31:31.755553  274157 status.go:176] ha-387221-m02 status: &{Name:ha-387221-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:31:31.755566  274157 status.go:174] checking status of ha-387221-m04 ...
	I1210 06:31:31.756762  274157 status.go:371] ha-387221-m04 host status = "Stopped" (err=<nil>)
	I1210 06:31:31.756776  274157 status.go:384] host is not running, skipping remaining checks
	I1210 06:31:31.756780  274157 status.go:176] ha-387221-m04 status: &{Name:ha-387221-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (251.53s)
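As the run above shows, `minikube status` exits non-zero (exit status 7 here) when the profile's hosts are stopped, so callers have to treat a non-zero exit as state information rather than a hard failure. A minimal sketch of driving the same command from Go, using the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test above; the binary path and profile are from this log.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-387221", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero code reflects stopped/partial state, as in the output above.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}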

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (121.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1210 06:32:51.984502  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m1.217525331s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.90s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 node add --control-plane --alsologtostderr -v 5
E1210 06:34:15.062252  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:34:19.790482  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-387221 node add --control-plane --alsologtostderr -v 5: (1m22.259249319s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-387221 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                    
TestJSONOutput/start/Command (92.07s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-094164 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1210 06:35:43.624285  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:36:00.545408  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-094164 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.070859492s)
--- PASS: TestJSONOutput/start/Command (92.07s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-094164 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-094164 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.02s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-094164 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-094164 --output=json --user=testUser: (7.019258057s)
--- PASS: TestJSONOutput/stop/Command (7.02s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-675502 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-675502 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.997958ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"25e95dcf-7fdd-4554-bff9-313656c2679d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-675502] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"54ef7cd3-1c5c-4c39-97f8-fae958101a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"56af6bd0-fe35-4482-9d8c-b7083fd0205b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab483f23-00ca-485f-8fd5-2d4197fafd32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig"}}
	{"specversion":"1.0","id":"d40f6aef-2981-46fa-8cc2-971024579307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube"}}
	{"specversion":"1.0","id":"2afa39bf-22eb-40a1-8a16-3ce6e54e2d9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c12dd86f-b0a8-477c-b85e-9f90d50987c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"31dec24c-adee-42c5-924d-0ad3af838cdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-675502" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-675502
--- PASS: TestErrorJSONOutput (0.24s)
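With --output=json, each line minikube prints is a self-contained CloudEvents-style JSON object like the ones in the stdout block above. The sketch below scans such a stream and surfaces error events; the field names are taken from the output shown here, and the struct is illustrative rather than minikube's own decoder.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent covers the fields visible in the JSON lines above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start --output=json ...` into this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}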

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (110.19s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-295117 --driver=kvm2  --container-runtime=crio
E1210 06:37:22.854036  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-295117 --driver=kvm2  --container-runtime=crio: (52.966583764s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-297925 --driver=kvm2  --container-runtime=crio
E1210 06:37:51.984800  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-297925 --driver=kvm2  --container-runtime=crio: (54.508224424s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-295117
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-297925
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-297925" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-297925
helpers_test.go:176: Cleaning up "first-295117" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-295117
--- PASS: TestMinikubeProfile (110.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-081731 --memory=3072 --mount-string /tmp/TestMountStartserial1277270551/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-081731 --memory=3072 --mount-string /tmp/TestMountStartserial1277270551/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.311846946s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.31s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-081731 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-081731 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
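The mount verification above relies on `findmnt --json /minikube-host` inside the guest. Below is a minimal sketch of running the same check over `minikube ssh` and decoding the result; the profile name comes from this run, and the JSON field names follow findmnt's documented output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOutput models `findmnt --json`: a "filesystems" array of mount entries.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Same invocation as the test above, run against the first mount-start profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-081731",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}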

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-103897 --memory=3072 --mount-string /tmp/TestMountStartserial1277270551/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-103897 --memory=3072 --mount-string /tmp/TestMountStartserial1277270551/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.148402208s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-081731 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-103897
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-103897: (1.335400534s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-103897
E1210 06:39:19.789951  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-103897: (19.995500636s)
--- PASS: TestMountStart/serial/RestartStopped (21.00s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103897 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-214512 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 06:41:00.544887  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-214512 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.457186671s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-214512 -- rollout status deployment/busybox: (3.360035019s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-65cjw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-bjjn7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-65cjw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-bjjn7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-65cjw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-bjjn7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.04s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-65cjw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-65cjw -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-bjjn7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-214512 -- exec busybox-7b57f96db7-bjjn7 -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (43.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-214512 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-214512 -v=5 --alsologtostderr: (42.927553922s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.41s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-214512 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp testdata/cp-test.txt multinode-214512:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457275295/001/cp-test_multinode-214512.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512:/home/docker/cp-test.txt multinode-214512-m02:/home/docker/cp-test_multinode-214512_multinode-214512-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test_multinode-214512_multinode-214512-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512:/home/docker/cp-test.txt multinode-214512-m03:/home/docker/cp-test_multinode-214512_multinode-214512-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test_multinode-214512_multinode-214512-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp testdata/cp-test.txt multinode-214512-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457275295/001/cp-test_multinode-214512-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m02:/home/docker/cp-test.txt multinode-214512:/home/docker/cp-test_multinode-214512-m02_multinode-214512.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test_multinode-214512-m02_multinode-214512.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m02:/home/docker/cp-test.txt multinode-214512-m03:/home/docker/cp-test_multinode-214512-m02_multinode-214512-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test_multinode-214512-m02_multinode-214512-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp testdata/cp-test.txt multinode-214512-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3457275295/001/cp-test_multinode-214512-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m03:/home/docker/cp-test.txt multinode-214512:/home/docker/cp-test_multinode-214512-m03_multinode-214512.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512 "sudo cat /home/docker/cp-test_multinode-214512-m03_multinode-214512.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 cp multinode-214512-m03:/home/docker/cp-test.txt multinode-214512-m02:/home/docker/cp-test_multinode-214512-m03_multinode-214512-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 ssh -n multinode-214512-m02 "sudo cat /home/docker/cp-test_multinode-214512-m03_multinode-214512-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.24s)

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-214512 node stop m03: (1.729409174s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-214512 status: exit status 7 (346.641764ms)

                                                
                                                
-- stdout --
	multinode-214512
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214512-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214512-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr: exit status 7 (346.400554ms)

                                                
                                                
-- stdout --
	multinode-214512
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214512-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214512-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:42:31.384527  280622 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:42:31.384646  280622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:31.384653  280622 out.go:374] Setting ErrFile to fd 2...
	I1210 06:42:31.384663  280622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:31.384863  280622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:42:31.385079  280622 out.go:368] Setting JSON to false
	I1210 06:42:31.385112  280622 mustload.go:66] Loading cluster: multinode-214512
	I1210 06:42:31.385162  280622 notify.go:221] Checking for updates...
	I1210 06:42:31.385517  280622 config.go:182] Loaded profile config "multinode-214512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:42:31.385535  280622 status.go:174] checking status of multinode-214512 ...
	I1210 06:42:31.387574  280622 status.go:371] multinode-214512 host status = "Running" (err=<nil>)
	I1210 06:42:31.387592  280622 host.go:66] Checking if "multinode-214512" exists ...
	I1210 06:42:31.390299  280622 main.go:143] libmachine: domain multinode-214512 has defined MAC address 52:54:00:b8:bb:ab in network mk-multinode-214512
	I1210 06:42:31.390826  280622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:bb:ab", ip: ""} in network mk-multinode-214512: {Iface:virbr2 ExpiryTime:2025-12-10 07:39:54 +0000 UTC Type:0 Mac:52:54:00:b8:bb:ab Iaid: IPaddr:192.168.50.212 Prefix:24 Hostname:multinode-214512 Clientid:01:52:54:00:b8:bb:ab}
	I1210 06:42:31.390865  280622 main.go:143] libmachine: domain multinode-214512 has defined IP address 192.168.50.212 and MAC address 52:54:00:b8:bb:ab in network mk-multinode-214512
	I1210 06:42:31.391145  280622 host.go:66] Checking if "multinode-214512" exists ...
	I1210 06:42:31.391428  280622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:42:31.394138  280622 main.go:143] libmachine: domain multinode-214512 has defined MAC address 52:54:00:b8:bb:ab in network mk-multinode-214512
	I1210 06:42:31.394555  280622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:bb:ab", ip: ""} in network mk-multinode-214512: {Iface:virbr2 ExpiryTime:2025-12-10 07:39:54 +0000 UTC Type:0 Mac:52:54:00:b8:bb:ab Iaid: IPaddr:192.168.50.212 Prefix:24 Hostname:multinode-214512 Clientid:01:52:54:00:b8:bb:ab}
	I1210 06:42:31.394579  280622 main.go:143] libmachine: domain multinode-214512 has defined IP address 192.168.50.212 and MAC address 52:54:00:b8:bb:ab in network mk-multinode-214512
	I1210 06:42:31.394734  280622 sshutil.go:53] new ssh client: &{IP:192.168.50.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/multinode-214512/id_rsa Username:docker}
	I1210 06:42:31.483649  280622 ssh_runner.go:195] Run: systemctl --version
	I1210 06:42:31.490400  280622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:31.509178  280622 kubeconfig.go:125] found "multinode-214512" server: "https://192.168.50.212:8443"
	I1210 06:42:31.509221  280622 api_server.go:166] Checking apiserver status ...
	I1210 06:42:31.509266  280622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:31.529303  280622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2167/cgroup
	W1210 06:42:31.542127  280622 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:42:31.542232  280622 ssh_runner.go:195] Run: ls
	I1210 06:42:31.547863  280622 api_server.go:253] Checking apiserver healthz at https://192.168.50.212:8443/healthz ...
	I1210 06:42:31.554744  280622 api_server.go:279] https://192.168.50.212:8443/healthz returned 200:
	ok
	I1210 06:42:31.554772  280622 status.go:463] multinode-214512 apiserver status = Running (err=<nil>)
	I1210 06:42:31.554782  280622 status.go:176] multinode-214512 status: &{Name:multinode-214512 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:42:31.554809  280622 status.go:174] checking status of multinode-214512-m02 ...
	I1210 06:42:31.556513  280622 status.go:371] multinode-214512-m02 host status = "Running" (err=<nil>)
	I1210 06:42:31.556537  280622 host.go:66] Checking if "multinode-214512-m02" exists ...
	I1210 06:42:31.559489  280622 main.go:143] libmachine: domain multinode-214512-m02 has defined MAC address 52:54:00:bf:f2:20 in network mk-multinode-214512
	I1210 06:42:31.559939  280622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bf:f2:20", ip: ""} in network mk-multinode-214512: {Iface:virbr2 ExpiryTime:2025-12-10 07:41:05 +0000 UTC Type:0 Mac:52:54:00:bf:f2:20 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:multinode-214512-m02 Clientid:01:52:54:00:bf:f2:20}
	I1210 06:42:31.559967  280622 main.go:143] libmachine: domain multinode-214512-m02 has defined IP address 192.168.50.197 and MAC address 52:54:00:bf:f2:20 in network mk-multinode-214512
	I1210 06:42:31.560112  280622 host.go:66] Checking if "multinode-214512-m02" exists ...
	I1210 06:42:31.560336  280622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:42:31.562691  280622 main.go:143] libmachine: domain multinode-214512-m02 has defined MAC address 52:54:00:bf:f2:20 in network mk-multinode-214512
	I1210 06:42:31.563127  280622 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bf:f2:20", ip: ""} in network mk-multinode-214512: {Iface:virbr2 ExpiryTime:2025-12-10 07:41:05 +0000 UTC Type:0 Mac:52:54:00:bf:f2:20 Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:multinode-214512-m02 Clientid:01:52:54:00:bf:f2:20}
	I1210 06:42:31.563148  280622 main.go:143] libmachine: domain multinode-214512-m02 has defined IP address 192.168.50.197 and MAC address 52:54:00:bf:f2:20 in network mk-multinode-214512
	I1210 06:42:31.563326  280622 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22094-243461/.minikube/machines/multinode-214512-m02/id_rsa Username:docker}
	I1210 06:42:31.648604  280622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:31.665664  280622 status.go:176] multinode-214512-m02 status: &{Name:multinode-214512-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:42:31.665702  280622 status.go:174] checking status of multinode-214512-m03 ...
	I1210 06:42:31.667618  280622 status.go:371] multinode-214512-m03 host status = "Stopped" (err=<nil>)
	I1210 06:42:31.667642  280622 status.go:384] host is not running, skipping remaining checks
	I1210 06:42:31.667649  280622 status.go:176] multinode-214512-m03 status: &{Name:multinode-214512-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

TestMultiNode/serial/StartAfterStop (39.02s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 node start m03 -v=5 --alsologtostderr
E1210 06:42:51.984126  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-214512 node start m03 -v=5 --alsologtostderr: (38.492215952s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.02s)

TestMultiNode/serial/RestartKeepsNodes (317.22s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-214512
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-214512
E1210 06:44:19.790867  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-214512: (2m40.055179258s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-214512 --wait=true -v=5 --alsologtostderr
E1210 06:46:00.545278  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:47:51.984678  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-214512 --wait=true -v=5 --alsologtostderr: (2m37.03520038s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-214512
--- PASS: TestMultiNode/serial/RestartKeepsNodes (317.22s)

TestMultiNode/serial/DeleteNode (2.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-214512 node delete m03: (2.17207978s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.65s)

TestMultiNode/serial/StopMultiNode (176.49s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 stop
E1210 06:49:19.786984  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:50:55.066506  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:00.544698  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-214512 stop: (2m56.357766483s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-214512 status: exit status 7 (67.688571ms)

-- stdout --
	multinode-214512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214512-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr: exit status 7 (65.920709ms)

-- stdout --
	multinode-214512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214512-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 06:51:27.044366  283308 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:51:27.044608  283308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:51:27.044616  283308 out.go:374] Setting ErrFile to fd 2...
	I1210 06:51:27.044621  283308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:51:27.044820  283308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:51:27.045021  283308 out.go:368] Setting JSON to false
	I1210 06:51:27.045057  283308 mustload.go:66] Loading cluster: multinode-214512
	I1210 06:51:27.045179  283308 notify.go:221] Checking for updates...
	I1210 06:51:27.045384  283308 config.go:182] Loaded profile config "multinode-214512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:51:27.045404  283308 status.go:174] checking status of multinode-214512 ...
	I1210 06:51:27.047756  283308 status.go:371] multinode-214512 host status = "Stopped" (err=<nil>)
	I1210 06:51:27.047778  283308 status.go:384] host is not running, skipping remaining checks
	I1210 06:51:27.047807  283308 status.go:176] multinode-214512 status: &{Name:multinode-214512 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:51:27.047848  283308 status.go:174] checking status of multinode-214512-m02 ...
	I1210 06:51:27.049483  283308 status.go:371] multinode-214512-m02 host status = "Stopped" (err=<nil>)
	I1210 06:51:27.049503  283308 status.go:384] host is not running, skipping remaining checks
	I1210 06:51:27.049511  283308 status.go:176] multinode-214512-m02 status: &{Name:multinode-214512-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (176.49s)

TestMultiNode/serial/RestartMultiNode (113.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-214512 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 06:52:23.625762  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:52:51.983829  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-214512 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.070538485s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-214512 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.66s)

TestMultiNode/serial/ValidateNameConflict (55.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-214512
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-214512-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-214512-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.893707ms)

-- stdout --
	* [multinode-214512-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-214512-m02' is duplicated with machine name 'multinode-214512-m02' in profile 'multinode-214512'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-214512-m03 --driver=kvm2  --container-runtime=crio
E1210 06:54:02.855540  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-214512-m03 --driver=kvm2  --container-runtime=crio: (54.327839757s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-214512
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-214512: exit status 80 (226.824398ms)

-- stdout --
	* Adding node m03 to cluster multinode-214512 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-214512-m03 already exists in multinode-214512-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-214512-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.52s)

TestPreload (163.21s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1210 06:54:19.786729  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m33.396803474s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934316 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-934316 image pull gcr.io/k8s-minikube/busybox: (2.407807337s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-934316
E1210 06:56:00.545612  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-934316: (8.639912574s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934316 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934316 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.696023616s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934316 image list
helpers_test.go:176: Cleaning up "test-preload-934316" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-934316
--- PASS: TestPreload (163.21s)

TestScheduledStopUnix (125.03s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-824530 --memory=3072 --driver=kvm2  --container-runtime=crio
E1210 06:57:51.984270  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-824530 --memory=3072 --driver=kvm2  --container-runtime=crio: (53.334421279s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824530 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 06:57:54.472770  286211 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:57:54.473189  286211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:54.473198  286211 out.go:374] Setting ErrFile to fd 2...
	I1210 06:57:54.473202  286211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:54.473383  286211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:57:54.473625  286211 out.go:368] Setting JSON to false
	I1210 06:57:54.473707  286211 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:57:54.474032  286211 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:57:54.474096  286211 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/config.json ...
	I1210 06:57:54.474275  286211 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:57:54.474385  286211 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-824530 -n scheduled-stop-824530
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824530 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 06:57:54.773233  286257 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:57:54.773460  286257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:54.773469  286257 out.go:374] Setting ErrFile to fd 2...
	I1210 06:57:54.773473  286257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:54.773665  286257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:57:54.773909  286257 out.go:368] Setting JSON to false
	I1210 06:57:54.774090  286257 daemonize_unix.go:73] killing process 286246 as it is an old scheduled stop
	I1210 06:57:54.774191  286257 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:57:54.774613  286257 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:57:54.774710  286257 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/config.json ...
	I1210 06:57:54.774941  286257 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:57:54.775078  286257 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 06:57:54.780696  247366 retry.go:31] will retry after 104.951µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.781943  247366 retry.go:31] will retry after 86.298µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.783112  247366 retry.go:31] will retry after 209.566µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.784284  247366 retry.go:31] will retry after 204.092µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.785447  247366 retry.go:31] will retry after 591.861µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.786603  247366 retry.go:31] will retry after 415.099µs: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.787746  247366 retry.go:31] will retry after 1.544409ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.789935  247366 retry.go:31] will retry after 2.040863ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.792080  247366 retry.go:31] will retry after 1.795655ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.794282  247366 retry.go:31] will retry after 3.302453ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.798528  247366 retry.go:31] will retry after 4.740448ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.803764  247366 retry.go:31] will retry after 9.951688ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.814033  247366 retry.go:31] will retry after 17.388697ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.832311  247366 retry.go:31] will retry after 28.768536ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
I1210 06:57:54.861585  247366 retry.go:31] will retry after 28.460282ms: open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824530 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824530 -n scheduled-stop-824530
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824530
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824530 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 06:58:20.487469  286405 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:58:20.487754  286405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:58:20.487767  286405 out.go:374] Setting ErrFile to fd 2...
	I1210 06:58:20.487771  286405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:58:20.488027  286405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 06:58:20.488346  286405 out.go:368] Setting JSON to false
	I1210 06:58:20.488446  286405 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:58:20.488780  286405 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:58:20.488854  286405 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/scheduled-stop-824530/config.json ...
	I1210 06:58:20.489086  286405 mustload.go:66] Loading cluster: scheduled-stop-824530
	I1210 06:58:20.489212  286405 config.go:182] Loaded profile config "scheduled-stop-824530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824530
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-824530: exit status 7 (64.541788ms)

-- stdout --
	scheduled-stop-824530
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824530 -n scheduled-stop-824530
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824530 -n scheduled-stop-824530: exit status 7 (62.212991ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-824530" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-824530
--- PASS: TestScheduledStopUnix (125.03s)

TestRunningBinaryUpgrade (131.33s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3600810181 start -p running-upgrade-511706 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3600810181 start -p running-upgrade-511706 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (46.907778412s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-511706 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-511706 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.938729628s)
helpers_test.go:176: Cleaning up "running-upgrade-511706" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-511706
--- PASS: TestRunningBinaryUpgrade (131.33s)

TestKubernetesUpgrade (182.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.303677599s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-300412
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-300412: (2.228421421s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-300412 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-300412 status --format={{.Host}}: exit status 7 (79.379578ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.235485518s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-300412 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.109347ms)

-- stdout --
	* [kubernetes-upgrade-300412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-300412
	    minikube start -p kubernetes-upgrade-300412 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3004122 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-300412 --kubernetes-version=v1.35.0-rc.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300412 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.535256192s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-300412" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-300412
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-300412: (1.004784786s)
--- PASS: TestKubernetesUpgrade (182.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (98.781858ms)

-- stdout --
	* [NoKubernetes-155295] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestStoppedBinaryUpgrade/Setup (0.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

TestNoKubernetes/serial/StartWithK8s (78.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155295 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155295 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.842149902s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-155295 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.44s)

TestStoppedBinaryUpgrade/Upgrade (139.26s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2557306920 start -p stopped-upgrade-411663 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1210 06:59:19.787532  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2557306920 start -p stopped-upgrade-411663 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m40.226555614s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2557306920 -p stopped-upgrade-411663 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2557306920 -p stopped-upgrade-411663 stop: (1.978665619s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-411663 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-411663 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.054865266s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (139.26s)

TestNoKubernetes/serial/StartWithStopK8s (31.56s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (30.359016792s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-155295 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-155295 status -o json: exit status 2 (250.069347ms)

-- stdout --
	{"Name":"NoKubernetes-155295","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-155295
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.56s)

TestNoKubernetes/serial/Start (34.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1210 07:01:00.545460  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (34.837318732s)
--- PASS: TestNoKubernetes/serial/Start (34.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-411663
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-411663: (1.255036908s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22094-243461/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-155295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-155295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (164.275326ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

TestNoKubernetes/serial/ProfileList (1.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-155295
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-155295: (1.364736266s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (63.42s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-155295 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-155295 --driver=kvm2  --container-runtime=crio: (1m3.423826713s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (63.42s)

TestPause/serial/Start (124.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-179913 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-179913 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m4.718262801s)
--- PASS: TestPause/serial/Start (124.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-155295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-155295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.065733ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/false (4.2s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-714139 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-714139 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (121.730503ms)

-- stdout --
	* [false-714139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1210 07:02:42.064736  290746 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:02:42.065045  290746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:02:42.065056  290746 out.go:374] Setting ErrFile to fd 2...
	I1210 07:02:42.065062  290746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:02:42.065300  290746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-243461/.minikube/bin
	I1210 07:02:42.065791  290746 out.go:368] Setting JSON to false
	I1210 07:02:42.066730  290746 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31509,"bootTime":1765318653,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 07:02:42.066790  290746 start.go:143] virtualization: kvm guest
	I1210 07:02:42.069143  290746 out.go:179] * [false-714139] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 07:02:42.070593  290746 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:02:42.070608  290746 notify.go:221] Checking for updates...
	I1210 07:02:42.073287  290746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:02:42.074972  290746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-243461/kubeconfig
	I1210 07:02:42.076380  290746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-243461/.minikube
	I1210 07:02:42.077663  290746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 07:02:42.078804  290746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:02:42.080371  290746 config.go:182] Loaded profile config "force-systemd-env-909953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:02:42.080468  290746 config.go:182] Loaded profile config "pause-179913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:02:42.080540  290746 config.go:182] Loaded profile config "running-upgrade-511706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1210 07:02:42.080631  290746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:02:42.117364  290746 out.go:179] * Using the kvm2 driver based on user configuration
	I1210 07:02:42.118501  290746 start.go:309] selected driver: kvm2
	I1210 07:02:42.118515  290746 start.go:927] validating driver "kvm2" against <nil>
	I1210 07:02:42.118527  290746 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:02:42.120799  290746 out.go:203] 
	W1210 07:02:42.122151  290746 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 07:02:42.123365  290746 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-714139 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-714139

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-714139

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-714139

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-714139

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-714139

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-714139" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-714139

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714139"

                                                
                                                
----------------------- debugLogs end: false-714139 [took: 3.863577973s] --------------------------------
helpers_test.go:176: Cleaning up "false-714139" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-714139
--- PASS: TestNetworkPlugins/group/false (4.20s)
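The MK_USAGE exit captured above is the expected result for the "false" network-plugin case: minikube's start validation rejects the crio container runtime when no CNI is enabled. For comparison, a start invocation that satisfies this check would name a CNI explicitly, for example (illustrative only; the profile name here is hypothetical, not one used by this run):

	out/minikube-linux-amd64 start -p crio-cni-example --driver=kvm2 --container-runtime=crio --cni=bridge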

                                                
                                    
TestISOImage/Setup (35.73s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-539425 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1210 07:02:51.984107  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-539425 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.730042874s)
--- PASS: TestISOImage/Setup (35.73s)

                                                
                                    
TestISOImage/Binaries/crictl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.22s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
TestISOImage/Binaries/git (0.30s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.30s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
TestISOImage/Binaries/socat (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

                                                
                                    
TestISOImage/Binaries/wget (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (96.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m36.66008969s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (93.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m33.982679735s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (100.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-598869 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-598869 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m40.661137353s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-508835 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4fe4f7de-68f3-4acb-9676-ad6fd3c70d84] Pending
helpers_test.go:353: "busybox" [4fe4f7de-68f3-4acb-9676-ad6fd3c70d84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4fe4f7de-68f3-4acb-9676-ad6fd3c70d84] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004657369s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-508835 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-508835 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-508835 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.196475337s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-508835 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (82.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-508835 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-508835 --alsologtostderr -v=3: (1m22.573552363s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-548860 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a17beaa7-8aac-4ead-806c-70d62590f4a4] Pending
helpers_test.go:353: "busybox" [a17beaa7-8aac-4ead-806c-70d62590f4a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a17beaa7-8aac-4ead-806c-70d62590f4a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004796514s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-548860 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-548860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-548860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032469323s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-548860 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (87.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-548860 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-548860 --alsologtostderr -v=3: (1m27.833016411s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-598869 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [faee6826-b7fe-47a9-84f4-40909bbd3ff0] Pending
helpers_test.go:353: "busybox" [faee6826-b7fe-47a9-84f4-40909bbd3ff0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [faee6826-b7fe-47a9-84f4-40909bbd3ff0] Running
E1210 07:07:51.983989  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004710585s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-598869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-598869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-598869 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (71.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-598869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-598869 --alsologtostderr -v=3: (1m11.956216789s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (71.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508835 -n old-k8s-version-508835
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508835 -n old-k8s-version-508835: exit status 7 (83.81723ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-508835 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-508835 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (48.347912759s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508835 -n old-k8s-version-508835
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-206625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-206625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m51.973647212s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-548860 -n no-preload-548860
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-548860 -n no-preload-548860: exit status 7 (80.783734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-548860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (66.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-548860 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m5.920387251s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-548860 -n no-preload-548860
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (66.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-k5zfw" [02f6a0fd-9978-4881-ae3f-cc59a0b936e2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-k5zfw" [02f6a0fd-9978-4881-ae3f-cc59a0b936e2] Running
E1210 07:09:03.627540  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005227603s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-k5zfw" [02f6a0fd-9978-4881-ae3f-cc59a0b936e2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006091271s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-508835 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598869 -n embed-certs-598869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598869 -n embed-certs-598869: exit status 7 (73.767709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-598869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (67.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-598869 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-598869 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m6.684239404s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598869 -n embed-certs-598869
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (67.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-508835 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.40s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-508835 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508835 -n old-k8s-version-508835
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508835 -n old-k8s-version-508835: exit status 2 (262.011678ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-508835 -n old-k8s-version-508835
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-508835 -n old-k8s-version-508835: exit status 2 (264.322818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-508835 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508835 -n old-k8s-version-508835
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-508835 -n old-k8s-version-508835
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-672239 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1210 07:09:19.787070  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-672239 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m0.849326043s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-57654" [01ba258b-70bf-48ea-a470-816a2824ae0b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005564074s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-57654" [01ba258b-70bf-48ea-a470-816a2824ae0b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006394325s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-548860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-548860 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 07:09:49.265210  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 07:09:49.436808  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 07:09:49.609862  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-548860 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-548860 --alsologtostderr -v=1: (1.012367001s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-548860 -n no-preload-548860
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-548860 -n no-preload-548860: exit status 2 (250.956595ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-548860 -n no-preload-548860
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-548860 -n no-preload-548860: exit status 2 (245.352088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-548860 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-548860 -n no-preload-548860
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-548860 -n no-preload-548860
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.05s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (99.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m39.544785297s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-206625 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dc5211de-ad38-421b-83ed-f05156744cd7] Pending
helpers_test.go:353: "busybox" [dc5211de-ad38-421b-83ed-f05156744cd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dc5211de-ad38-421b-83ed-f05156744cd7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.007827347s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-206625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-206625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-206625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.270742013s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-206625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (71.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-206625 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-206625 --alsologtostderr -v=3: (1m11.706706641s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (71.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-q7c2s" [8b6d97f5-9460-4cdb-a118-8ee992176ea2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-q7c2s" [8b6d97f5-9460-4cdb-a118-8ee992176ea2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00561585s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-672239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-672239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.48863493s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-672239 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-672239 --alsologtostderr -v=3: (8.939875389s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-q7c2s" [8b6d97f5-9460-4cdb-a118-8ee992176ea2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011486429s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-598869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-672239 -n newest-cni-672239
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-672239 -n newest-cni-672239: exit status 7 (71.713452ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-672239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-672239 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-672239 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (37.80691767s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-672239 -n newest-cni-672239
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.74s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-598869 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 07:10:31.685229  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 07:10:31.864961  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 07:10:32.038400  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-598869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598869 -n embed-certs-598869
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598869 -n embed-certs-598869: exit status 2 (261.56571ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-598869 -n embed-certs-598869
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-598869 -n embed-certs-598869: exit status 2 (261.699664ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-598869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598869 -n embed-certs-598869
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-598869 -n embed-certs-598869
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1210 07:10:42.857935  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:00.545162  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/addons-819501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m29.80909876s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.8s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-672239 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1210 07:11:05.657421  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 07:11:05.829491  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
I1210 07:11:06.007075  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-672239 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-672239 --alsologtostderr -v=1: (1.48393632s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-672239 -n newest-cni-672239
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-672239 -n newest-cni-672239: exit status 2 (308.262347ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-672239 -n newest-cni-672239
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-672239 -n newest-cni-672239: exit status 2 (322.866932ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-672239 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-672239 --alsologtostderr -v=1: (1.063106467s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-672239 -n newest-cni-672239
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-672239 -n newest-cni-672239
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.09s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (92.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.193671523s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625: exit status 7 (85.270562ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-206625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-206625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1210 07:11:26.654774  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.661240  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.672726  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.694416  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.736007  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.818011  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:26.979477  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:27.301498  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:27.943922  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:29.225669  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:31.787760  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-206625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m14.86015773s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-714139 "pgrep -a kubelet"
I1210 07:11:34.117149  247366 config.go:182] Loaded profile config "auto-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tv7sd" [fd4f1361-4fc8-47d7-b888-0aa6c0cfcb1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:11:36.909395  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-tv7sd" [fd4f1361-4fc8-47d7-b888-0aa6c0cfcb1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.0062845s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (92.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.812205829s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.81s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-j522z" [a6541336-34a1-460c-a463-ebee3f02fe7f] Running
E1210 07:12:07.633259  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:12:10.777392  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.011287751s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-714139 "pgrep -a kubelet"
I1210 07:12:12.585320  247366 config.go:182] Loaded profile config "kindnet-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-n5w58" [0aca2a77-f9f4-4ab4-b1e7-1c8fc8828ada] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-n5w58" [0aca2a77-f9f4-4ab4-b1e7-1c8fc8828ada] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005032444s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5sv9z" [5f6d856c-9cff-4ead-b375-6f05b7e97b73] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5sv9z" [5f6d856c-9cff-4ead-b375-6f05b7e97b73] Running
E1210 07:12:48.594900  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005106508s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.566145741s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.57s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-5tvnt" [63f034bc-3c36-4542-94e2-6a27501328f8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-5tvnt" [63f034bc-3c36-4542-94e2-6a27501328f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006431343s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-714139 "pgrep -a kubelet"
I1210 07:12:49.965855  247366 config.go:182] Loaded profile config "calico-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.11s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-714139 replace --force -f testdata/netcat-deployment.yaml
I1210 07:12:50.844017  247366 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1210 07:12:50.872592  247366 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-46zg4" [1fd65b5e-ce42-405a-8c41-73e5670a328b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:12:51.983871  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-46zg4" [1fd65b5e-ce42-405a-8c41-73e5670a328b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004927428s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5sv9z" [5f6d856c-9cff-4ead-b375-6f05b7e97b73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004560785s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-206625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-206625 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 07:12:59.782523  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 07:12:59.956316  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
I1210 07:13:00.119321  247366 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-206625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625: exit status 2 (256.244625ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625: exit status 2 (278.625029ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-206625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-206625 -n default-k8s-diff-port-206625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (87.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1210 07:13:12.221238  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m27.023624624s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (103.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-714139 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m43.724107084s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.72s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-714139 "pgrep -a kubelet"
I1210 07:13:35.554480  247366 config.go:182] Loaded profile config "custom-flannel-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-z6kl8" [8dc00c4a-80dd-473c-8e5a-46f1cc8080d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-z6kl8" [8dc00c4a-80dd-473c-8e5a-46f1cc8080d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.007379713s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.18s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.26s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.26s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                    
TestISOImage/VersionJSON (0.19s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
--- PASS: TestISOImage/VersionJSON (0.19s)
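The /version.json read above is a small flat JSON object, and the four fields the test echoes (kicbase_version, minikube_version, commit, iso_version) map directly onto a struct. A hedged sketch of fetching and decoding it the same way (struct and JSON tags are inferred from the output above, not taken from the test source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// isoVersion mirrors the fields printed by TestISOImage/VersionJSON.
type isoVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	// Same command the test runs: cat /version.json inside the guest.
	out, err := exec.Command("minikube", "-p", "guest-539425", "ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}
	var v isoVersion
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("iso=%s minikube=%s kicbase=%s commit=%s\n",
		v.ISOVersion, v.MinikubeVersion, v.KicbaseVersion, v.Commit)
}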

                                                
                                    
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-539425 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1210 07:14:10.516747  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/old-k8s-version-508835/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:14:19.787318  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/functional-399479/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
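The eBPFSupport check above only looks for /sys/kernel/btf/vmlinux, which the kernel exposes when built with CONFIG_DEBUG_INFO_BTF and which CO-RE style eBPF tooling reads type information from. A local equivalent of the same presence check, as a sketch rather than the test's actual code:

package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/kernel/btf/vmlinux exists when the running kernel ships BTF type
	// info (CONFIG_DEBUG_INFO_BTF=y); its absence means CO-RE eBPF programs
	// cannot resolve kernel types on this machine.
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}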

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-714139 "pgrep -a kubelet"
I1210 07:14:23.264905  247366 config.go:182] Loaded profile config "enable-default-cni-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dxdwj" [23c8b744-6791-495e-81aa-460372c5b8d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-dxdwj" [23c8b744-6791-495e-81aa-460372c5b8d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004478507s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-968jd" [f61dfa14-b3aa-49f3-898f-84a5b24e9697] Running
E1210 07:14:34.143039  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/no-preload-548860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004132643s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-714139 "pgrep -a kubelet"
I1210 07:14:38.054958  247366 config.go:182] Loaded profile config "flannel-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xngg2" [df9fd4e1-b0f5-4481-8405-e898b349d030] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xngg2" [df9fd4e1-b0f5-4481-8405-e898b349d030] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005999905s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-714139 "pgrep -a kubelet"
I1210 07:15:06.364273  247366 config.go:182] Loaded profile config "bridge-714139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-714139 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-86zwn" [95d96841-c5aa-4a81-b752-4f248914f85a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:15:10.925268  247366 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-243461/.minikube/profiles/default-k8s-diff-port-206625/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-86zwn" [95d96841-c5aa-4a81-b752-4f248914f85a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00411621s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-714139 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-714139 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
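The DNS, Localhost and HairPin subtests above exec into the netcat deployment and run the same three probes for each CNI profile: an nslookup of kubernetes.default, an nc to localhost:8080, and an nc to the netcat service name to confirm hairpin traffic works. A compact sketch of those probes (assuming kubectl on PATH and one of the contexts shown, for example bridge-714139):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command inside the netcat deployment of the given kubectl
// context, mirroring the DNS/Localhost/HairPin probes in the log above.
func run(ctx string, args ...string) error {
	base := []string{"--context", ctx, "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	ctx := "bridge-714139" // any of the contexts exercised above
	checks := map[string][]string{
		"DNS":       {"nslookup", "kubernetes.default"},
		"Localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"HairPin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range checks {
		if err := run(ctx, args...); err != nil {
			fmt.Println(name, "FAIL:", err)
		} else {
			fmt.Println(name, "OK")
		}
	}
}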

                                                
                                    

Test skip (51/431)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
13 TestDownloadOnly/v1.34.3/preload-exists 0.06
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.32
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
373 TestStartStop/group/disable-driver-mounts 0.18
380 TestNetworkPlugins/group/kubenet 3.91
388 TestNetworkPlugins/group/cilium 5
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1210 05:28:45.931312  247366 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
W1210 05:28:45.977540  247366 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
W1210 05:28:45.991024  247366 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.34.3/preload-exists (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-819501 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-794678" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-794678
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-714139 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-714139" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-714139

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: docker system info:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: cri-docker daemon status:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: cri-docker daemon config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: cri-dockerd version:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: containerd daemon status:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: containerd daemon config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: containerd config dump:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: crio daemon status:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: crio daemon config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: /etc/crio:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

>>> host: crio config:
* Profile "kubenet-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714139"

----------------------- debugLogs end: kubenet-714139 [took: 3.727134621s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-714139" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-714139
--- SKIP: TestNetworkPlugins/group/kubenet (3.91s)
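
Note: every debugLogs probe above fails with "Profile ... not found" because the test is skipped before a cluster is ever started for it, so no kubenet-714139 profile or kubeconfig context exists when the debug collector runs. A minimal way to confirm this from the same workspace (assuming the harness-built binary at out/minikube-linux-amd64 used elsewhere in this report) would be:

    out/minikube-linux-amd64 profile list
    kubectl config get-contexts

Neither listing should show a kubenet-714139 entry, which matches the errors recorded above.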

TestNetworkPlugins/group/cilium (5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-714139 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-714139

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-714139

>>> host: /etc/nsswitch.conf:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/hosts:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/resolv.conf:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-714139

>>> host: crictl pods:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: crictl containers:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> k8s: describe netcat deployment:
error: context "cilium-714139" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-714139" does not exist

>>> k8s: netcat logs:
error: context "cilium-714139" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-714139" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-714139" does not exist

>>> k8s: coredns logs:
error: context "cilium-714139" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-714139" does not exist

>>> k8s: api server logs:
error: context "cilium-714139" does not exist

>>> host: /etc/cni:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: ip a s:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: ip r s:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: iptables-save:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: iptables table nat:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-714139

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-714139

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-714139" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-714139" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-714139

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-714139

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-714139" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-714139" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-714139" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-714139" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-714139" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: kubelet daemon config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> k8s: kubelet logs:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-714139

>>> host: docker daemon status:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: docker daemon config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: docker system info:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: cri-docker daemon status:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: cri-docker daemon config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: cri-dockerd version:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: containerd daemon status:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: containerd daemon config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: containerd config dump:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: crio daemon status:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: crio daemon config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: /etc/crio:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

>>> host: crio config:
* Profile "cilium-714139" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714139"

----------------------- debugLogs end: cilium-714139 [took: 4.819041818s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-714139" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-714139
--- SKIP: TestNetworkPlugins/group/cilium (5.00s)
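
Note: the ">>> k8s: kubectl config:" dump above shows an entirely empty kubeconfig (clusters: null, contexts: null, current-context: ""), which is why every kubectl-based probe reports a configuration error ("context was not found" / "context ... does not exist") rather than a connection failure. An illustrative way to reproduce that distinction locally (standard kubectl commands; the context name is the one from this run) would be:

    kubectl config get-contexts
    kubectl --context cilium-714139 get pods -A

With no contexts defined, the second command fails at configuration time with the same "context "cilium-714139" does not exist" message seen throughout the debugLogs, before any API request is attempted.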
